Amazon SQS raises the FIFO throughput limit

In 「Amazon SQS announces increased throughput quota for FIFO High Throughput mode」 I saw that AWS has raised the FIFO throughput limit in Amazon SQS. This would normally be a routine announcement, but what surprised me is that the new limits differ by region:

Amazon Simple Queue Service (SQS) announces an increased quota for a high throughput mode for FIFO queues, allowing you to process up to 9,000 transactions per second, per API action in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt) regions. For Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions, the throughput quota has been increased to 4,500 transactions per second, per API action. For all other regions where SQS is generally available today, the quota for high throughput mode quota has been increased to 2,400 transactions per second.

The first tier (like us-east-1, us-west-2, and eu-west-1) gets 9,000 TPS, the second tier gets 4,500 TPS, and regions not listed above get 2,400 TPS.
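
High throughput mode itself is something you turn on per FIFO queue by flipping two queue attributes. A minimal boto3 sketch, with a placeholder queue name, could look like this:

    import boto3

    # Hypothetical queue in one of the 9,000 TPS regions.
    sqs = boto3.client("sqs", region_name="us-east-1")

    queue_url = sqs.create_queue(
        QueueName="orders.fifo",
        Attributes={"FifoQueue": "true"},
    )["QueueUrl"]

    # Enable high throughput mode: deduplicate per message group and
    # apply the throughput limit per message group instead of per queue.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={
            "DeduplicationScope": "messageGroup",
            "FifoThroughputLimit": "perMessageGroupId",
        },
    )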

The other somewhat surprising thing is that the Frankfurt region is actually in the first tier...

Two new Amazon EFS announcements this time: Elastic Throughput and lower latency

Among the new things announced for Amazon EFS at this re:Invent, I have seen two so far. The first is 「New – Announcing Amazon EFS Elastic Throughput」, which introduces Elastic Throughput.

The traditional Bursting Throughput mode allocates speed according to how much storage you use, with a baseline of 50 MB/sec per TB and the ability to burst up to 100 MB/sec per TB:

When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.

Elastic Throughput, on the other hand, is a high-performance mode that provides up to 3 GB/sec of read throughput and 1 GB/sec of write throughput:

Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.
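
Switching an existing file system over is just a throughput-mode change. A minimal boto3 sketch, assuming a hypothetical file system ID and a boto3 version recent enough to accept the elastic mode:

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Hypothetical file system; switch it from bursting to elastic throughput.
    efs.update_file_system(
        FileSystemId="fs-0123456789abcdef0",
        ThroughputMode="elastic",
    )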

But this of course comes at a price: Elastic Throughput is billed by the amount of data transferred. With us-east-1 pricing, reads are $0.03/GB and writes are $0.06/GB.

After some rough math, it looks better suited for workloads that need a large amount of fast reads and writes over a short period. If time doesn't matter (like a cron job), you don't need Elastic Throughput... and using it for home directories might be a decent choice?
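
To make that rough math concrete, here is a back-of-envelope calculation using the us-east-1 prices above and a made-up monthly workload:

    # us-east-1 Elastic Throughput data transfer pricing (from the announcement).
    READ_PER_GB = 0.03
    WRITE_PER_GB = 0.06

    # Hypothetical workload: 500 GB read and 100 GB written per month.
    reads_gb, writes_gb = 500, 100

    monthly_cost = reads_gb * READ_PER_GB + writes_gb * WRITE_PER_GB
    print(f"Elastic Throughput transfer cost: ${monthly_cost:.2f}/month")  # $21.00/month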

The second item is free: a performance improvement to Amazon EFS that lowers latency, announced in 「AWS announces lower latencies for Amazon Elastic File System」.

First is the read performance improvement; from the description it sounds like the gain comes from adding a cache layer:

Amazon EFS now delivers up to 60% lower read operation latencies when working with frequently-accessed data and metadata.

They have also done work on small-file writes:

In addition, EFS now delivers up to 40% lower write operation latencies when working with small files (<64 KB) and metadata.

However, these improvements only apply to new EFS file systems, and this wave only covers us-east-1:

These enhancements are available automatically for all new EFS file systems using General Purpose mode in the US East (N. Virginia) Region, and will become available in the remaining AWS commercial regions over the coming weeks.

MySQL performance on different EBS volume types

Someone at Percona wrote a post about the performance differences of MySQL running on different EBS volume types on AWS: 「Performance of Various EBS Storage Types in AWS」. The write-up itself isn't very rigorous, though, so the point is to look at the test data directly and build your own understanding.

His method was to create gp2, gp3, io1, and io2 volumes on AWS with identical parameters, all 1 TB with 3,000 IOPS, and he claims they should therefore be the same:

So, all the volumes are 1TB with 3000 iops, so in theory, they are the same.
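
For reference, recreating that setup in boto3 would look roughly like the sketch below; note that gp2 doesn't even take an IOPS parameter (it derives IOPS from size), which already hints the four volumes aren't really identical:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    common = {"AvailabilityZone": "eu-west-1a", "Size": 1024}  # 1 TB

    # gp2: IOPS is derived from size (3 IOPS/GB), so it cannot be specified.
    ec2.create_volume(VolumeType="gp2", **common)

    # gp3, io1, io2: IOPS is provisioned explicitly.
    for vtype in ("gp3", "io1", "io2"):
        ec2.create_volume(VolumeType=vtype, Iops=3000, **common)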

But the 「Amazon EBS volume types」 documentation already covers this: setting durability aside, even the performance-related specs are different.

For gp2, the docs state outright that the advertised performance is only guaranteed 99% of the time:

AWS designs gp2 volumes to deliver their provisioned performance 99% of the time.

gp3, on the other hand, only uses the marketing phrase 「consistent baseline rate」 and doesn't even promise 99%:

These volumes deliver a consistent baseline rate of 3,000 IOPS and 125 MiB/s, included with the price of storage.

The io* volumes are guaranteed 99.9%:

Provisioned IOPS SSD volumes use a consistent IOPS rate, which you specify when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time.

Also, the throughput of gp2 and gp3 in the test doesn't appear to have been adjusted to the same number. A 1 TB gp2 volume gets 250 MB/sec, while a 1 TB gp3 volume gets 125 MB/sec unless you pay for extra throughput.
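
Putting those numbers side by side (a simplified view; gp2 has extra burst rules for small volumes, which don't matter at 1 TB):

    SIZE_GB = 1024  # 1 TB, as in the Percona test

    # gp2: 3 IOPS per GB, and 250 MB/s of throughput at this size.
    gp2_iops = 3 * SIZE_GB          # ~3,072 IOPS
    gp2_throughput_mb_s = 250

    # gp3: fixed baseline regardless of size, unless extra throughput is purchased.
    gp3_iops = 3000
    gp3_throughput_mb_s = 125

    print(gp2_iops, gp2_throughput_mb_s)   # 3072 250
    print(gp3_iops, gp3_throughput_mb_s)   # 3000 125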

This sentence also shows that he isn't that familiar with AWS:

The tests were only run in a single availability zone (eu-west-1a).

As mentioned in 「AZ IDs for your AWS resources」, the same AZ name in different accounts doesn't necessarily map to the same physical location; you need to look at the AZ ID:

For example, the Availability Zone us-east-1a for your AWS account might not have the same location as us-east-1a for another AWS account.

To identify the location of your resources relative to your accounts, you must use the AZ ID, which is a unique and consistent identifier for an Availability Zone. For example, use1-az1 is an AZ ID for the us-east-1 Region and it is the same location in every AWS account.
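
If you want to check this yourself, the mapping from your account's AZ names to the stable AZ IDs is available through the EC2 API; a minimal boto3 sketch:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Map this account's AZ names (e.g. eu-west-1a) to their stable AZ IDs (e.g. euw1-az1).
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(az["ZoneName"], "->", az["ZoneId"])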

Given that only size and IOPS were configured, the rest of the test results are pretty much what you'd expect: io2 is expensive but delivers the best performance, io1 is a bit worse, and gp3 is actually good enough in most cases, though be aware that its default throughput is lower than gp2's.

Quite a bit of news about Amazon EBS io2...

Another batch of new Amazon EBS announcements covers improvements to io2.

The first two of the three announcements can be read together: the main point is the launch of EBS Block Express, which brings a performance boost:

Built on our new EBS Block Express architecture that takes advantage of some advanced communication protocols implemented as part of the AWS Nitro System, the volumes will give you up to 256K IOPS & 4000 MBps of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency. Throughput scales proportionally at 0.256 MB/second per provisioned IOPS, up to a maximum of 4000 MBps per volume. You can provision 1000 IOPS per GiB of storage, twice as many as before. The increased volume size & higher throughput means that you will no longer need to stripe multiple EBS volumes together, reducing complexity and management overhead.
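
The 「0.256 MB/second per provisioned IOPS」 scaling rule is easy to sanity-check; note that throughput already maxes out well below the 256K IOPS ceiling:

    MAX_THROUGHPUT_MB_S = 4000
    MB_S_PER_IOPS = 0.256

    def block_express_throughput(provisioned_iops):
        """Throughput for an io2 Block Express volume per the quoted scaling rule."""
        return min(provisioned_iops * MB_S_PER_IOPS, MAX_THROUGHPUT_MB_S)

    print(block_express_throughput(15_625))   # 4000.0 -> throughput maxes out here
    print(block_express_throughput(256_000))  # 4000.0 -> still capped at 4000 MB/s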

It's currently in preview, so you need to apply to test it. Note that only a limited set of regions is supported for now (unlike gp3, which launched in all regions), and it requires r5b instances:

The preview is currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Frankfurt) Regions. During the preview, we support the use of R5b instances, with support for other Nitro-powered instances in the works.

The third announcement is about a discount on io2 IOPS: the portion above 32K IOPS gets a 30% discount:

Now, with the new tiered pricing structure, the first 32,000 IOPS provisioned on a volume are charged at the current base rate ($0.065 per provisioned IOPS-mo) and the second tier between 32,001 and 64,000 is charged at a 30% lower rate ($0.046 per provisioned IOPS-mo).

For the preview version mentioned above (EBS Block Express), which can go beyond 64K IOPS, that portion is even cheaper, with another 30% discount stacked on top:

Furthermore, for customers who have even higher performance requirement than currently supported by a single io2 volume today, we are previewing io2 volumes that run on EBS Block Express, the next generation of our block storage architecture. io2 Block Express volumes can be provisioned to deliver peak IOPS of 256,000. For these volume, any IOPS provisioned over 64,000 IOPS will be charged at a further 30% lower rate than the second tier ($0.032 per provisioned IOP-mo for IOPS over 64,000). This lowers the effective rate to $0.038 per provisioned IOPS on a volume provisioned with 256,000 IOPS.
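
Working the quoted effective rate out by hand, with the three tiers from the announcements:

    # io2 tiered IOPS pricing from the announcements (per provisioned IOPS-month).
    TIERS = [
        (32_000, 0.065),          # first 32,000 IOPS
        (32_000, 0.046),          # 32,001 - 64,000 IOPS (30% off)
        (float("inf"), 0.032),    # above 64,000 IOPS, Block Express only (another 30% off)
    ]

    def monthly_iops_cost(provisioned_iops):
        remaining, cost = provisioned_iops, 0.0
        for size, rate in TIERS:
            used = min(remaining, size)
            cost += used * rate
            remaining -= used
            if remaining <= 0:
                break
        return cost

    iops = 256_000
    cost = monthly_iops_cost(iops)
    print(f"${cost:,.0f}/month, effective ${cost / iops:.3f} per IOPS")  # ~$0.038 per IOPS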

This is for people chasing performance; for everyday use most of us will probably still stick with gp2 or gp3 SSDs...

Amazon EBS launches gp3

This year's AWS re:Invent has started again, though because of the pandemic it's mostly online this time... Let me start by going through the Amazon EBS-related updates.

First up is the new gp3 type, also SSD-based: 「New – Amazon EBS gp3 Volume Lets You Provision Performance Apart From Capacity」.

The per-GB cost is 20% lower than gp2:

Today I would like to tell you about gp3, a new type of SSD EBS volume that lets you provision performance independent of storage capacity, and offers a 20% lower price than existing gp2 volume types.

It gives you 3,000 IOPS and 125 MB/sec out of the box, and if you need more you can pay extra for it:

gp3 is designed to provide predictable 3,000 IOPS baseline performance and 125 MiB/s regardless of volume size. It is ideal for applications that require high performance at a low cost such as MySQL, Cassandra, virtual desktops and Hadoop analytics. Customers looking for higher performance can scale up to 16,000 IOPS and 1,000 MiB/s for an additional fee. The top performance of gp3 is 4 times faster than max throughput of gp2 volumes.

But as the table in 「Amazon EBS volume types」 shows, note that gp2's burst throughput (250 MB/sec) is higher than gp3's baseline (125 MB/sec).

Because of this, EBS volumes like /data that see mostly random access can be switched over, but workloads with heavy sequential access might not be a good fit.

As for IOPS, gp2 volumes of 1 TB or less should be fine to switch, because gp2 gives 3 IOPS per GB, so any gp2 volume under 1 TB is below 3,000 IOPS.

For the migration itself, you can migrate to gp3 directly from the AWS console:

If you’re currently using gp2, you can easily migrate your EBS volumes to gp3 using Amazon EBS Elastic Volumes, an existing feature of Amazon EBS. Elastic Volumes allows you to modify the volume type, IOPS, and throughput of your existing EBS volumes without interrupting your Amazon EC2 instances.
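
If you prefer the API over the console, the same migration is a single ModifyVolume call; a minimal boto3 sketch with a placeholder volume ID:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical gp2 volume; change it in place to gp3 with the baseline specs.
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",
        VolumeType="gp3",
        Iops=3000,
        Throughput=125,  # MB/s; raise this if you need gp2-like 250 MB/s
    )

    # The modification runs in the background; check its progress if you care.
    resp = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
    print(resp["VolumesModifications"][0]["ModificationState"])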


But according to the table in 「Amazon EBS volume types」, gp3 can be a boot volume, yet I can't change my boot volume over XDDD

Update: I just noticed the documentation has been corrected; it looks like it can't be used as a boot volume after all...

Not sure which side got it wrong; I'll check again in a few days XDDD

AWS announces an increase to Amazon EFS's minimum throughput

AWS has announced an increase to Amazon EFS's minimum throughput: 「Amazon Elastic File System increases file system minimum throughput」.

The few numbers in the first paragraph are pretty much the whole point:

Amazon Elastic File System (Amazon EFS) file systems using the default bursting throughput mode now have a minimum throughput of 1 MiB/s. All EFS bursting mode file systems (regardless of size) can drive 100 MiB/s of throughput, and file systems with more than 1TiB of Standard class storage can drive 100 MiB/s per TB when burst credits are available. This change increases the minimum throughput from 50KiB/s per GiB of Standard class storage to a fixed minimum of 1 MiB/s for file systems with less than 20 GiB of Standard class storage, when burst credits are exhausted.

The previous minimum guaranteed throughput was 50 KiB/sec per GiB, which meant you needed 20 GiB of usage to get 1 MiB/sec. Now, for file systems with less than 20 GiB, the minimum is raised to a fixed 1 MiB/sec.
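
In numbers, for a few small file system sizes the change looks like this:

    # Old rule: 50 KiB/s of baseline per GiB; new rule: a 1 MiB/s floor for small file systems.
    for size_gib in (2, 5, 10):
        old_kib_s = size_gib * 50
        new_kib_s = max(old_kib_s, 1024)  # 1 MiB/s floor
        print(f"{size_gib} GiB: {old_kib_s} KiB/s -> {new_kib_s} KiB/s")
    # 2 GiB: 100 KiB/s -> 1024 KiB/s
    # 5 GiB: 250 KiB/s -> 1024 KiB/s
    # 10 GiB: 500 KiB/s -> 1024 KiB/s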

This makes things a bit nicer for people just getting started, but EFS is mainly about sharing across machines; performance-wise, a locally attached EBS volume is still much better (since the OS can cache).

I previously put /home on EFS on AWS, and because all the I/O had to go over the network, building things with pyenv was painfully slow. I later spent a day moving everything back to EBS, and it's much faster now...

Tuning databases with machine learning

News posted on the AWS AI Blog at the beginning of the month: 「Tuning Your DBMS Automatically with Machine Learning」.

It's research from the Carnegie Mellon Database Group. Besides the default settings, four other configurations are compared: OtterTune (the subject of this research), a tuning script (a common open source tool for people unfamiliar with databases), manual tuning by a DBA, and RDS:

MySQL

PostgreSQL

The more obvious conclusions are:

  • The default values are the worst in every case (across both MySQL and PostgreSQL, and for both 99th-percentile latency and QPS, i.e. all four of the two-by-two results). And the gap between the defaults and everything else is obvious.
  • OtterTune beats the tuning script in every case. That's a reasonable outcome, since the whole point is to replace what other automated tooling produces.

As for the discussions about DBAs losing their jobs, I'm all for it... if tedious work like this can be automated, let's hand it over to automation XD

Amazon EBS launches new volume types

Amazon EBS has launched new volume types, all more economical (in plain language: cheaper) than the current options: 「Amazon EBS Update – New Cold Storage and Throughput Options」.

The first is Amazon EBS Throughput Optimized HDD, code-named st1; the second is Amazon EBS Cold HDD, code-named sc1. Both are traditional spinning disks.

The first, st1, focuses on sequential throughput:

Starts at 250 MB/s for a 1 terabyte volume, and grows by 250 MB/s for every additional provisioned terabyte until reaching a maximum burst throughput of 500 MB/s.

The second, sc1, focuses on keeping the cost of piling up data low:

Designed for workloads similar to those for Throughput Optimized HDD that are accessed less frequently; $0.025 / gigabyte / month.

Note that the burst credits accumulate, and any access smaller than 1 MB is counted as 1 MB, so these are only suitable for workloads with lots of sequential access, like Hadoop and other big data applications:

For both of the new magnetic volume types, the burst credit bucket can grow until it reaches the size of the volume. In other words, when a volume’s bucket is full, you can scan the entire volume at the burst rate. Each I/O request of 1 megabyte or less counts as 1 megabyte’s worth of credit. Sequential I/O operations are merged into larger ones where possible; this can increase throughput and maximizes the value of the burst credit bucket (to learn more about how the bucket operates, visit the Performance Burst Details section of my New SSD-Backed Elastic Block Storage post).
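
That 「every I/O of 1 MB or less counts as 1 MB」 rule is exactly what punishes random-access workloads; a quick illustration:

    REQUESTS = 10_000
    REQUEST_SIZE_KB = 16  # small random reads, e.g. database pages

    actual_data_mb = REQUESTS * REQUEST_SIZE_KB / 1024   # ~156 MB actually read
    credits_consumed_mb = REQUESTS * 1                   # each request billed as 1 MB of credit

    print(f"data read: {actual_data_mb:.0f} MB, burst credit consumed: {credits_consumed_mb} MB")
    # data read: 156 MB, burst credit consumed: 10000 MB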

Also, sc1 is currently the cheapest per unit; I wonder how slow it would be as a root volume XDDD