Amazon EBS Snapshot Lock in Compliance Mode

Jeff Barr wrote "New – Amazon EBS Snapshot Lock", introducing Snapshot Lock, a new Amazon EBS feature.

As the name suggests, it locks a snapshot so it cannot be deleted. What makes it interesting is that there are two modes. The first is Governance, which only protects against accidental deletion:

This mode protects snapshots from deletions by all users. However, with the proper IAM permissions, the lock duration can be extended or shortened, the lock can be deleted, and the mode can be changed from Governance mode to Compliance mode.

The more important one is the second mode, Compliance: once the cooling-off period has passed, the lock can no longer be touched no matter how much privilege you have (I assume not even the root account can), and the only operation left is extending the lock duration:

This mode protects snapshots from actions by the root user and all IAM users. After a cooling-off period of up to 72 hours, neither the snapshot nor the lock can be deleted until the lock duration expires, and the mode cannot be changed. With the proper IAM permissions the lock duration can be extended, but it cannot be shortened.

It really is a feature built for regulatory compliance...
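
For reference, a minimal sketch of locking an existing snapshot in Compliance mode via boto3 (assuming a recent boto3 that exposes the LockSnapshot API; the snapshot ID and durations are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Lock a snapshot in Compliance mode with a 72-hour cooling-off period;
# after that window the lock can only be extended, never shortened or removed.
resp = ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    LockMode="compliance",
    CoolOffPeriod=72,   # hours (1-72)
    LockDuration=365,   # days the snapshot stays undeletable
)
print(resp)
```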

Fifteen Years of Amazon EBS, and Some Numbers

James Hamilton, an SVP at AWS, wrote "Amazon Elastic Block Store at 15 Years" about the fifteenth anniversary of Amazon EBS, and it includes some interesting numbers.

Daily I/O is now 100 trillion operations, which averages out to roughly 1.16 billion IOPS. Taking today's high-end NVMe cards, which are on the order of 1M IOPS each, you would need somewhere over a thousand cards with no redundancy at all, which feels plausible for something at AWS's scale... and given that the IOPS-heavy workloads should mostly be on SSD or NVMe, once you add redundancy and headroom for bursts, guessing a fleet of at least 100,000 drives should not be a problem.

I asked the EBS team to quantify customer usage in 2023, the 15th year of EBS. Focusing first on daily usage, EBS delivers more than 100 trillion input/output operations per day.

The other number is data transfer: 13 EB per day, which averages out to a bit over 150 TB/sec. Spread across the 100,000 drives guessed above, that is roughly 1.5 GB/sec per drive, which is about the right order of magnitude.

Perhaps even more staggering is the fact that EBS transfers more than 13 exabytes of data for customers every day.
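
A quick back-of-envelope check of the two paragraphs above (pure arithmetic, using decimal units; the 1M-IOPS-per-card figure and the 100,000-drive fleet are my own guesses, not numbers from the article):

```python
SECONDS_PER_DAY = 86_400

# Daily totals from the article
daily_iops = 100e12        # 100 trillion I/O operations per day
daily_bytes = 13e18        # 13 EB per day (decimal exabytes)

avg_iops = daily_iops / SECONDS_PER_DAY    # ~1.16e9 IOPS on average
avg_tput = daily_bytes / SECONDS_PER_DAY   # ~1.5e14 B/s, i.e. ~150 TB/s

# Assumptions for the rough fleet-size estimate
nvme_iops = 1e6            # ~1M IOPS per high-end NVMe device (guess)
fleet_guess = 100_000      # guessed drive count incl. redundancy/burst headroom

print(f"average IOPS: {avg_iops:.2e}")
print(f"average throughput: {avg_tput / 1e12:.1f} TB/s")
print(f"min drives w/o redundancy: {avg_iops / nvme_iops:.0f}")
print(f"per-drive throughput at {fleet_guess} drives: {avg_tput / fleet_guess / 1e9:.2f} GB/s")
```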

Another number: millions of customers (or maybe accounts) create more than 390 million EBS volumes every day. I suspect this mostly reflects instances coming and going, since launching an EC2 instance these days almost always means attaching an EBS boot disk:

Continuing to focus on daily usage, millions of customers use EBS daily, and these millions of customers create more than 390 million EBS storage volumes each day.

As James Hamilton says, EBS has become a pretty critical piece of infrastructure, and many AWS services are built on top of it; for example, RDS uses EBS block replication to build read-only replicas instead of going through traditional database-level replication.

Comparing GCP Disks with AWS EBS...

While upgrading a jump host on GCP this afternoon, I noticed the machine was using a Standard Persistent Disk (Standard PD), which is HDD-based and painfully slow. Digging into it, the differences between AWS and GCP turn out to be fairly large, so here is a summary...

For pricing, I am using the Tokyo region (ap-northeast-1) for AWS and the Taiwan region (asia-east1) for GCP.

SSD first:

On AWS, the most commonly used type, gp3, is $0.096/GB. Regardless of volume size it provides 3,000 IOPS and 125 MB/sec of throughput, and you can pay extra for more IOPS and throughput. Because of that baseline, it works very well as a boot disk.

The older gp2 is $0.12/GB and provides 3 IOPS/GB with a floor of 100 IOPS, so it is also acceptable as a boot disk and not painfully slow.

On GCP, a Balanced Persistent Disk (Balanced PD) is $0.1/GB and provides 6 read IOPS/GB + 6 write IOPS/GB + 0.28 MB/sec/GB of throughput; a 10 GB disk therefore gets 60 read IOPS + 60 write IOPS + 2.8 MB/sec of throughput.

An SSD Persistent Disk (SSD PD) is $0.17/GB and provides 30 read IOPS/GB + 30 write IOPS/GB + 0.48 MB/sec/GB of throughput; a 10 GB disk therefore gets 300 read IOPS + 300 write IOPS + 4.8 MB/sec of throughput.

Now the HDD side:

On AWS this is the type called standard, priced at $0.08/GB, plus another $0.08 per million I/O requests. That is fine for a boot disk, but it gets expensive if an application hammers it with I/O.

On GCP it is the Standard Persistent Disk (Standard PD) at $0.04/GB, providing 0.75 read IOPS/GB + 1.5 write IOPS/GB + 0.12 MB/sec/GB of throughput; a 10 GB disk therefore gets 7.5 read IOPS + 15 write IOPS + 1.2 MB/sec of throughput.
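
Since all three GCP tiers scale linearly with size, here is a small sketch that reproduces the per-size numbers above (rates copied from this post rather than re-checked against current GCP documentation, and ignoring the per-disk and per-VM caps GCP also applies):

```python
# (read IOPS per GB, write IOPS per GB, MB/s per GB) as quoted in this post
PD_RATES = {
    "standard-pd": (0.75, 1.5, 0.12),
    "balanced-pd": (6, 6, 0.28),
    "ssd-pd": (30, 30, 0.48),
}

def pd_performance(kind, size_gb):
    """Return (read IOPS, write IOPS, MB/s) for a persistent disk of the given size."""
    read_rate, write_rate, tput_rate = PD_RATES[kind]
    return read_rate * size_gb, write_rate * size_gb, tput_rate * size_gb

for kind in PD_RATES:
    read_iops, write_iops, mbps = pd_performance(kind, 10)
    print(f"{kind:12s} 10GB: {read_iops:g} read IOPS, "
          f"{write_iops:g} write IOPS, {mbps:g} MB/s")
```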

So if you are after price/performance without caring much about performance (but not caring zero either?), standard on AWS is not a great deal, since for only a little more you get gp3 and a huge performance bump; on GCP, however, Standard PD looks attractive, since the per-GB price difference is quite large...

Two New Amazon EFS Announcements: Elastic Throughput and Lower Latency

Of the new Amazon EFS items announced at this re:Invent, I have seen two so far. The first is "New – Announcing Amazon EFS Elastic Throughput", which introduces Elastic Throughput.

The traditional Bursting Throughput mode allocates speed according to how much space you use: the baseline is 50 MB/sec per TB, with bursts up to 100 MB/sec per TB:

When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.

Elastic Throughput is a high-performance mode that provides up to 3 GB/sec of read throughput and 1 GB/sec of write throughput:

Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.

This of course comes at a cost: Elastic Throughput is billed by the amount of data transferred. In us-east-1, reads are $0.03/GB and writes are $0.06/GB.

A rough calculation suggests it fits workloads that need a lot of fast I/O over a short period. If you do not care about time (say, a cron job), you do not need Elastic Throughput... and using it for home directories might be a decent choice?
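
To make that rough calculation concrete, a sketch of the monthly Elastic Throughput transfer bill for a hypothetical workload, using the us-east-1 per-GB prices quoted above (the workload numbers are made up for illustration, and storage charges are left out):

```python
# us-east-1 Elastic Throughput prices quoted in the announcement
READ_PER_GB = 0.03   # USD per GB read
WRITE_PER_GB = 0.06  # USD per GB written

def elastic_throughput_cost(read_gb, write_gb):
    """Monthly data-transfer charge for EFS Elastic Throughput (storage not included)."""
    return read_gb * READ_PER_GB + write_gb * WRITE_PER_GB

# Hypothetical workload: 2 TB read and 512 GB written per month
print(f"${elastic_throughput_cost(2048, 512):.2f} per month")
```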

The second item is free: a performance improvement to Amazon EFS that lowers latency: "AWS announces lower latencies for Amazon Elastic File System".

First is the read performance improvement, which from the description sounds like it comes from adding a cache layer:

Amazon EFS now delivers up to 60% lower read operation latencies when working with frequently-accessed data and metadata.

Writes of small files also got attention:

In addition, EFS now delivers up to 40% lower write operation latencies when working with small files (<64 KB) and metadata.

These improvements only apply to newly created EFS file systems, though, and this wave covers only us-east-1:

These enhancements are available automatically for all new EFS file systems using General Purpose mode in the US East (N. Virginia) Region, and will become available in the remaining AWS commercial regions over the coming weeks.

Network Conditions in the AWS Taipei Region (Routing & CDN)

In "目前 AWS 台北區只能開 *.2xlarge 的機器" ("the AWS Taipei region can currently only launch *.2xlarge instances") I got a machine up, so the first thing was to test the network from the AWS Taipei region to the various ISPs in Taiwan.

First, endpoints inside Taiwan: there appears to be peering everywhere, and testing by IP shows very low latency.

Next, the overseas Internet: quite a few US destinations go out via AWS in Tokyo, but testing www.three.com.hk in Hong Kong shows traffic exiting through TPIX, so Taiwan has some local exits as well; I did not see any major problems with peering or transit.

However, almost everything served through GeoDNS-based lookups gets directed to Tokyo.

Cloudflare, which runs on anycast, does much better: the paid-plan www.plurk.com lands on the Taipei PoP, and the free-plan wiki.gslin.org also lands somewhere in Asia? (I cannot tell whether it is Tokyo; the identifier jtha shows up, which looks like Japan but could also be a Thai PoP?)

This is probably mostly because these IP ranges are still geolocated to ap-northeast-1 in Tokyo; it will take each provider updating their data before traffic can land on PoPs in Taiwan, unless you deliberately use a DNS resolver without EDNS Client Subnet.
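
If you want to check how a GeoDNS-served hostname answers for different client locations, one way is to send the query yourself with an explicit EDNS Client Subnet option. A small sketch using dnspython (the hostname, resolver, and subnets are placeholders, and the resolver has to honor ECS for this to show anything):

```python
import dns.edns
import dns.message
import dns.query

def resolve_with_ecs(qname, resolver, subnet, prefix):
    """Query `resolver` for `qname`, pretending the client sits in `subnet`."""
    ecs = dns.edns.ECSOption(subnet, prefix)
    query = dns.message.make_query(qname, "A", use_edns=0, options=[ecs])
    response = dns.query.udp(query, resolver, timeout=5)
    return [rr.to_text() for rrset in response.answer for rr in rrset]

# Placeholder subnets: compare the answers returned for two different client locations
for subnet in ("203.0.113.0", "198.51.100.0"):
    print(subnet, resolve_with_ecs("www.example.com", "8.8.8.8", subnet, 24))
```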

Some Discussion of the Amazon EFS Performance Improvements

The previous post on the Amazon EFS performance improvements covered the announcement; on Hacker News, a PMT (Product-Manager-Technical) from the Amazon EFS team showed up to answer questions: "Amazon Elastic File System Update – Sub-Millisecond Read Latency (amazon.com)". Searching for geertj should surface his replies...

For instance, even a post published by Jeff Barr still needs sign-off from the legal team before it goes out:

(PMT on the EFS team).

Yes, the wordings are carefully formulated as they have to be signed off by the AWS legal team for obvious reasons. With that said, this update was driven by profiling real applications and addressing the most common operations, so the benefits are real. For example, a simple WordPress "hello world" is now about 2x as fast as before.

Also, this round of performance improvement is achieved through a cache layer:

I'm the PMT for this project in the EFS team. The "flip the switch" part was indeed one of the harder parts to get right. Happy to share some limited details. The performance improvement builds on a distributed consistent cache. You can enable such a cache in multiple steps. First you deploy the software across the entire stack that supports the caching protocol but it's disabled by configuration. Then you turn it for the multiple components that are involved in the right order. Another thing that was hard to get right was to ensure that there are no performance regressions due to the consistency protocol.

And there is a cache in every AZ:

The caches are local to each AZ so you get the low latency in each AZ, the other details are different. Unfortunately I can't share additional details at this moment, but we are looking to do a technical update on EFS at some point soon, maybe at a similar venue!

It also looks like the metadata cache is where most of the win comes from:

NFS workloads are typically metadata heavy and highly correlated in time, so you can achieve very high hit rates. I can't share any specific numbers unfortunately.

There are still plenty of detailed numbers they cannot disclose, but knowing it is done with a cache is enough to roughly imagine how it was built...

AWS App Runner Can Finally Access Resources Inside a VPC

This is last week's news: when App Runner first launched it could not reach resources inside a VPC, which made it hard to see how to use it; that gap has finally been filled: "New for App Runner – VPC Support".

I am still not optimistic about it, though; sitting right next to it are AWS Elastic Beanstalk and AWS Amplify, highly similar services where you just write code, push it up, and it runs:

AWS App Runner is a fully managed service that makes it easy for developers to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. Start with your source code or a container image. App Runner builds and deploys the web application automatically, load balances traffic with encryption, scales to meet your traffic needs, and makes it easy for your services to communicate with other AWS services and applications that run in a private Amazon VPC. With App Runner, rather than thinking about servers or scaling, you have more time to focus on your applications.

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

AWS Amplify is a set of purpose-built tools and features that lets frontend web and mobile developers quickly and easily build full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as your use cases evolve. With Amplify, you can configure a web or mobile app backend, connect your app in minutes, visually build a web frontend UI, and easily manage app content outside the AWS console. Ship faster and scale effortlessly—with no cloud expertise needed.

Not to mention the Lambda-style architectures sitting nearby as well...

A Few Bits of Common Knowledge About Amazon EC2

On Twitter I saw Laravel News share "Mistakes I've Made in AWS", which goes over some common knowledge about Amazon EC2.

On EC2, the T family (currently mainly t2/t3/t3a/t4g) is great for development and even for production systems with modest load, and Unlimited mode lets you keep bursting by paying once your CPU credits run out.

The article points out that T-family instances are usually used where you do not need a lot of CPU, in which case the AMD-based t3a is generally a solid choice, saving roughly 10% over the Intel-based t3. If you can live with ARM, t4g is another option: cheaper still and faster for many workloads. A colleague did run into Python behaving differently there than on x86-64, though, so that part you will have to work out yourself...

Another one: EBS still defaults to gp2, but now that gp3 is out you can switch in most cases; it is mainly 20% cheaper and comes with a fixed 3,000 IOPS baseline.

There are cases where you should not switch, though, mainly because gp2 can burst to 250 MB/sec while gp3 only gives 125 MB/sec. You can buy extra throughput on gp3, but it is not cheap, so for that kind of workload staying on gp2 is probably the better deal.
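
As a rough illustration of the 20% saving, here is a sketch using the Tokyo per-GB prices quoted in the disk-comparison post above ($0.12 for gp2, $0.096 for gp3); the extra-throughput charge is deliberately left out because it is exactly the caveat in the previous paragraph:

```python
GP2_PER_GB = 0.12    # USD/GB-month, Tokyo (ap-northeast-1)
GP3_PER_GB = 0.096   # USD/GB-month, Tokyo (ap-northeast-1)

def monthly_cost(size_gb):
    """Storage-only monthly cost of gp2 vs gp3 volumes of the same size."""
    return size_gb * GP2_PER_GB, size_gb * GP3_PER_GB

for size in (100, 500, 1000):
    gp2, gp3 = monthly_cost(size)
    print(f"{size:5d} GB: gp2 ${gp2:7.2f}  gp3 ${gp3:7.2f}  (saves {1 - gp3 / gp2:.0%})")
```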

A more hands-on alternative I would recommend here is attaching two (or more) gp3 volumes and running RAID0 across them (on Linux you can do this with mdadm), which should lift both IOPS and throughput...

MySQL Performance on Different EBS Volume Types

Someone at Percona wrote about the performance differences of MySQL running on different EBS types on AWS: "Performance of Various EBS Storage Types in AWS". The write-up itself is not very rigorous, though; the value is in looking at the test data directly and forming your own understanding.

His method was to create gp2, gp3, io1, and io2 volumes on AWS with identical parameters, all 1 TB with 3,000 IOPS, claiming they should therefore be the same:

So, all the volumes are 1TB with 3000 iops, so in theory, they are the same.

But the "Amazon EBS volume types" documentation already covers this: even setting durability aside, the performance-related guarantees alone are different.

For gp2, it states outright that the advertised performance is only delivered 99% of the time:

AWS designs gp2 volumes to deliver their provisioned performance 99% of the time.

gp3 only gets the marketing phrase "consistent baseline rate", without even a 99% guarantee:

These volumes deliver a consistent baseline rate of 3,000 IOPS and 125 MiB/s, included with the price of storage.

The io* types are guaranteed 99.9%:

Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

Also, the test does not seem to have set gp2 and gp3 to the same throughput. A 1 TB gp2 volume gets 250 MB/sec, while a 1 TB gp3 volume gets 125 MB/sec unless you buy extra throughput.
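
If you wanted the gp3 volume in such a test to match gp2's 250 MB/sec at 1 TB, you would have to provision the throughput explicitly. A minimal boto3 sketch (the AZ is a placeholder, and it assumes the default credentials and region setup):

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 defaults to 3,000 IOPS and 125 MB/s; here both are set explicitly so the
# volume matches the 3,000 IOPS / 250 MB/s that the 1 TB gp2 volume delivers.
volume = ec2.create_volume(
    AvailabilityZone="ap-northeast-1a",  # placeholder AZ
    Size=1024,        # GiB
    VolumeType="gp3",
    Iops=3000,
    Throughput=250,   # MB/s; anything above 125 is billed extra
)
print(volume["VolumeId"])
```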

This sentence also shows he is not that familiar with AWS:

The tests were only run in a single availability zone (eu-west-1a).

"AZ IDs for your AWS resources" explains that the same AZ name does not necessarily map to the same physical location across accounts; you have to look at the AZ ID:

For example, the Availability Zone us-east-1a for your AWS account might not have the same location as us-east-1a for another AWS account.

To identify the location of your resources relative to your accounts, you must use the AZ ID, which is a unique and consistent identifier for an Availability Zone. For example, use1-az1 is an AZ ID for the us-east-1 Region and it is the same location in every AWS account.
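
To see which physical zone your account's eu-west-1a (or us-east-1a) actually is, you can look up the ZoneId. A small boto3 sketch (assuming default credentials; the region is just an example):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Map this account's AZ names to the account-independent AZ IDs.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```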

Given that only size and IOPS were configured, the remaining test results are about what you would expect: io2 is expensive but delivers the best performance, io1 is a bit worse, and gp3 is plenty for most situations, as long as you remember its default throughput is lower than gp2's.