AWS has released its own official Amazon S3 FUSE package

I came across the 「Mountpoint for Amazon S3」 project: AWS has released its own Amazon S3 FUSE package. There is also some discussion on Hacker News: 「Mountpoint – file client for S3 written in Rust, from AWS (github.com/awslabs)」.

Amazon S3 is quite a bit cheaper than the other storage options AWS offers. Taking us-east-1 (US East 1) as an example, S3 is $0.023/GB, while EBS (gp3) is $0.08/GB, and even EBS (st1) is $0.045/GB.

Compared to EBS, S3 adds charges for API calls, so for applications that don't generate a lot of API calls (for example, ones that write data to files in large chunks), operating Amazon S3 through FUSE lets existing packaged software or programs run on it directly.
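
As a rough illustration of the price gap, here is a minimal Python sketch comparing monthly storage cost for the three options, using the per-GB prices quoted above. The 500 GB dataset size is a made-up number for the example, and S3 request (API call) charges are deliberately left out, which is exactly why the comparison only holds for workloads with few API calls.

```python
# Monthly storage-cost comparison for us-east-1, using the per-GB
# prices from the paragraph above. The 500 GB dataset size is an
# illustrative assumption; S3 request charges are not included.
PRICES_PER_GB = {
    "S3 Standard": 0.023,
    "EBS gp3": 0.080,
    "EBS st1": 0.045,
}

dataset_gb = 500  # hypothetical dataset size

for name, price in PRICES_PER_GB.items():
    print(f"{name}: ${dataset_gb * price:.2f}/month")
# S3 Standard: $11.50/month
# EBS gp3: $40.00/month
# EBS st1: $22.50/month
```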

Another common use case is letting packaged software or off-the-shelf programs read data from S3.

The project that used to come to mind immediately for this kind of thing is s3fs-fuse. It has been around for a long time, and everyone knows that writes from multiple clients are its pain point.

What AWS is doing here feels like a bit of duplicated effort: s3fs-fuse already solves most of what this project seems to be aiming for. Right now the only selling point looks like being Rust-based, while s3fs-fuse is mostly C++, which doesn't really make much of a difference:

Mountpoint for Amazon S3 is optimized for read-heavy workloads that need high throughput. It intentionally does not implement the full POSIX specification for file systems.

The project is still an alpha release, and it's not clear what direction it is actually headed...

Btrfs improvements in Linux 6.2

Saw news of Btrfs improvements on Hacker News: 「Btrfs With Linux 6.2 Bringing Performance Improvements, Better RAID 5/6 Reliability」, with the corresponding discussion at 「Btrfs in Linux 6.2 brings performance improvements, better RAID 5/6 reliability (phoronix.com)」.

Because ext4 is already very mature, and for special needs I would reach for OpenZFS instead, I haven't followed Btrfs in a long time. The improvements landing in Linux 6.2 are a good excuse to review where things stand.

The work looks focused on the RAID modes, covering both stability and performance, with a bit more emphasis on the RAID 5 side.

Looking at the current situation, the main reason Btrfs keeps being developed is that OpenZFS is licensed under the CDDL, which is incompatible with the GPLv2 used by the Linux kernel, so the two have to be maintained separately.

But OpenZFS is still well ahead of Btrfs in both functionality and maturity, so at this stage, if the architecture allows for OpenZFS, I would still go with OpenZFS...

Two new Amazon EFS announcements: Elastic Throughput and lower latency

Of the new Amazon EFS things announced at this re:Invent, I've seen two so far. The first is 「New – Announcing Amazon EFS Elastic Throughput」, which introduces Elastic Throughput.

The traditional Bursting Throughput mode allocates throughput according to how much space you use, with a baseline of 50 MiB/sec per TiB, burstable up to 100 MiB/sec per TiB:

When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.
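
As a minimal sketch of how the quoted formula plays out, here is a small Python function that computes baseline and burst throughput for a given file system size; the constants come straight from the quote above, and the per-Region limit it mentions is not modeled.

```python
# Bursting Throughput mode, per the quote above:
#   with burst credits:    100 MiB/s per TiB of storage, minimum 100 MiB/s
#   without burst credits:  50 MiB/s per TiB of storage, minimum 1 MiB/s
# The per-Region limit mentioned in the quote is not modeled here.
def bursting_throughput(storage_tib: float) -> dict:
    return {
        "burst_mibps": max(100.0 * storage_tib, 100.0),
        "baseline_mibps": max(50.0 * storage_tib, 1.0),
    }

print(bursting_throughput(0.5))  # {'burst_mibps': 100.0, 'baseline_mibps': 25.0}
print(bursting_throughput(4.0))  # {'burst_mibps': 400.0, 'baseline_mibps': 200.0}
```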

Elastic Throughput, on the other hand, is a high-performance mode that can provide up to 3 GiB/sec of read throughput and 1 GiB/sec of write throughput:

Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.

But this of course comes at a price: Elastic Throughput is billed by the amount of data transferred. With us-east-1 pricing, reads are $0.03/GB and writes are $0.06/GB.

Doing some rough math, it looks best suited to applications that need to read and write a lot of data quickly within a short window. Anything that doesn't care about time (like a cron job) doesn't need Elastic Throughput... and using it for home directories might actually be a decent fit?
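
A minimal sketch of that rough math, using the $0.03/GB read and $0.06/GB write prices above; the monthly transfer volumes are made-up numbers, just to show the shape of the bill.

```python
# Elastic Throughput is metered per GB transferred (us-east-1 prices above).
READ_PER_GB = 0.03
WRITE_PER_GB = 0.06

def elastic_throughput_cost(read_gb: float, write_gb: float) -> float:
    return read_gb * READ_PER_GB + write_gb * WRITE_PER_GB

# Hypothetical month: 2 TB read, 500 GB written.
print(f"${elastic_throughput_cost(2000, 500):.2f}/month")  # $90.00/month
```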

The second item is free: a performance improvement to Amazon EFS that lowers latency: 「AWS announces lower latencies for Amazon Elastic File System」.

The first part is improved read performance; from the wording it sounds like the gain comes from adding a caching layer:

Amazon EFS now delivers up to 60% lower read operation latencies when working with frequently-accessed data and metadata.

The other part is handling for small-file writes:

In addition, EFS now delivers up to 40% lower write operation latencies when working with small files (<64 KB) and metadata.

These improvements only apply to new EFS file systems, though, and for this wave only in us-east-1:

These enhancements are available automatically for all new EFS file systems using General Purpose mode in the US East (N. Virginia) Region, and will become available in the remaining AWS commercial regions over the coming weeks.

Turns out there's a proper term for this: TOCTOU (time-of-check to time-of-use)

I ran into the term while reading 「The trouble with symbolic links」: 「TOCTOU (Time-of-check to time-of-use)」, literally "check first, then use". It's a common security (hole) pattern: after the check, the thing you checked can be changed by someone else, so by the time you actually use it a security hole can open up.
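
A minimal Python sketch of the classic pattern (my own illustration, not something taken from the linked article): the code checks a path with os.access() and then opens it, and an attacker can swap the path for a symlink in the gap between the two calls. The path is hypothetical.

```python
import os

path = "/tmp/report.txt"  # hypothetical path in a world-writable directory

# Time of check: may the (less-privileged) user read this file?
if os.access(path, os.R_OK):
    # ... an attacker can replace /tmp/report.txt with a symlink to
    # /etc/shadow right here, between the check and the use ...

    # Time of use: a privileged process opens whatever the path points
    # to *now*, which may not be what was checked above.
    with open(path) as f:
        data = f.read()
```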

In a database-style environment, isolation (the I in ACID) guarantees this kind of problem can't happen (you need REPEATABLE-READ or a higher isolation level).

But on file systems things don't look so good: in 2004 researchers showed there is no portable way to guarantee that TOCTOU problems can be avoided:

In the context of file system TOCTOU race conditions, the fundamental challenge is ensuring that the file system cannot be changed between two system calls. In 2004, an impossibility result was published, showing that there was no portable, deterministic technique for avoiding TOCTOU race conditions.

One kind of mitigation is to track file descriptors:

Since this impossibility result, libraries for tracking file descriptors and ensuring correctness have been proposed by researchers.
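
A minimal sketch of the file-descriptor-oriented idea (my own illustration, not the libraries the quote refers to): open the file first, refusing to follow symlinks, and then run every check against the already-open descriptor, so the check and the use can no longer be split apart by a rename or symlink swap.

```python
import os
import stat

path = "/tmp/report.txt"  # same hypothetical path as above

# Open first (refusing to follow a symlink), then check the descriptor:
# fstat() applies to the object we actually opened, not to the path.
fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
try:
    st = os.fstat(fd)
    if not stat.S_ISREG(st.st_mode):
        raise RuntimeError("not a regular file")
    data = os.read(fd, st.st_size)
finally:
    os.close(fd)
```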

The other approach (which gets closer to the root of the problem) is for the file system API to support transactions, but that doesn't seem to have been accepted by the mainstream?

An alternative solution proposed in the research community is for UNIX systems to adopt transactions in the file system or the OS kernel. Transactions provide a concurrency control abstraction for the OS, and can be used to prevent TOCTOU races. While no production UNIX kernel has yet adopted transactions, proof-of-concept research prototypes have been developed for Linux, including the Valor file system and the TxOS kernel. Microsoft Windows has added transactions to its NTFS file system, but Microsoft discourages their use, and has indicated that they may be removed in a future version of Windows.

Right now the issue seems to be that there's no API design the Linux community is willing to accept?

FSMonitor, Git 2.37.0's speedup for huge monorepos

I'll use GitHub's write-up for this one: 「Improve Git monorepo performance with a file system monitor」.

Starting with 2.37.0, Windows and Mac users can turn on the FSMonitor feature, which records file system changes and greatly reduces the time spent scanning the whole repository. With it enabled, the status time on a large project like chromium drops dramatically.

Linux isn't supported yet, though, and since all my environments are Linux, I can't use it for now...

Some discussion of the Amazon EFS performance improvements

The previous post, 「Amazon EFS performance improvements」, covered the Amazon EFS performance gains. Over on Hacker News, a PMT (Product-Manager-Technical) on the Amazon EFS team showed up to answer questions: 「Amazon Elastic File System Update – Sub-Millisecond Read Latency (amazon.com)」; searching for geertj should bring up his replies...

For instance, even though the post was written by Jeff Barr, it still had to be signed off by the legal team before it could be published:

(PMT on the EFS team).

Yes, the wordings are carefully formulated as they have to be signed off by the AWS legal team for obvious reasons. With that said, this update was driven by profiling real applications and addressing the most common operations, so the benefits are real. For example, a simple WordPress "hello world" is now about 2x as fast as before.

Also, this round of performance improvement was achieved with a caching layer:

I'm the PMT for this project in the EFS team. The "flip the switch" part was indeed one of the harder parts to get right. Happy to share some limited details. The performance improvement builds on a distributed consistent cache. You can enable such a cache in multiple steps. First you deploy the software across the entire stack that supports the caching protocol but it's disabled by configuration. Then you turn it for the multiple components that are involved in the right order. Another thing that was hard to get right was to ensure that there are no performance regressions due to the consistency protocol.

And there is a cache in every AZ:

The caches are local to each AZ so you get the low latency in each AZ, the other details are different. Unfortunately I can't share additional details at this moment, but we are looking to do a technical update on EFS at some point soon, maybe at a similar venue!

And it sounds like the win comes mainly from the metadata cache:

NFS workloads are typically metadata heavy and highly correlated in time, so you can achieve very high hit rates. I can't share any specific numbers unfortunately.

A lot of the detailed numbers still can't be disclosed, but just knowing it's done with a cache is enough to roughly imagine how the backend was built...

Amazon EFS performance improvements

AWS announced that they have dramatically lowered Amazon EFS latency to improve performance: 「Amazon Elastic File System Update – Sub-Millisecond Read Latency」.

On Linux, EFS is usually mounted over NFS, so single-digit-millisecond latency really does hurt performance a lot. With reads now claimed to drop to 0.6 ms, the difference should be quite noticeable:

Up until today, EFS latency for read operations (both data and metadata) was typically in the low single-digit milliseconds. Effective today, new and existing EFS file systems now provide average latency as low as 600 microseconds for the majority of read operations on data and metadata.

And there's no extra charge:

This performance boost applies to One Zone and Standard General Purpose EFS file systems. New or old, you will still get the same availability, durability, scalability, and strong read-after-write consistency that you have come to expect from EFS, at no additional cost and with no configuration changes.

They also flipped all existing EFS file systems over during the past few weeks:

We “flipped the switch” and enabled this performance boost for all existing EFS General Purpose mode file systems over the course of the last few weeks, so you may already have noticed the improvement. Of course, any new file systems that you create will also benefit.

The other problem with EFS is still that it's painfully expensive; you're trading money for convenience...

Amazon EFS adds a Replication feature

Jeff Barr announced on the official blog that Amazon EFS now offers replication: 「New – Replication for Amazon Elastic File System (EFS)」.

The console shows a cross-region setup screen.

Once created, the replica is a read-only filesystem.

There is also a fail-over mechanism; after failing over, the replica switches from read-only to read-write.

Note that, architecturally, this is eventually consistent, with changes expected to propagate within a minute. That's to be expected, otherwise the latency would be too high:

All replication traffic stays on the AWS global backbone, and most changes are replicated within a minute, with an overall Recovery Point Objective (RPO) of 15 minutes for most file systems.

Also, replication doesn't count against I/O burst credits or throughput, which is a somewhat special detail:

Replication does not consume any burst credits and it does not count against the provisioned throughput of the file system.

The replication feature itself is not billed separately; you only pay for the space used by EFS and the bandwidth generated by replication:

You pay the usual storage fees for the original and replica file systems and any applicable cross-region or intra-region data transfer charges.

Tuning PostgreSQL on ZFS

「Everything I've seen on optimizing Postgres on ZFS」 covers how to tune things when running PostgreSQL on top of ZFS. The author appears to keep updating the page, so it's worth revisiting when needed...

The main audience is people running self-hosted PostgreSQL. Compared to ext4 or XFS, having ZFS underneath lets you do a lot more, like compression and snapshots, which makes many DBA-type operations much more convenient. But precisely because of ZFS, both sides (ZFS & PostgreSQL) need to be tuned together to keep performance up...

In the short term I'd probably still just use RDS, though...

Performance differences between MySQL on ZFS and on ext4

Percona's 「MySQL/ZFS Performance Update」 tests ZFS once again, this time with fairly recent software, though note that the ZFS version used is still not the latest:

ZFS 0.8.6-1 is not bleeding edge, there have been more than 1700 commits since and after 0.8.6, the ZFS release number jumped to 2.0. The big addition included in the 2.0 release is native encryption.

The machines are in the cloud (on Azure). I'm not familiar with Azure instance types, but judging by the amount of memory and CPU they don't look like top-spec machines:

Benchmark host
Standard D2ds_v4 instance
2 vCpu, 8GB of Ram and 75 GB of temporary storage
Debian Buster

Database host
Standard E4-2ds-v4 instance
2 vCpu, 32GB of Ram and 150GB of temporary storage
256GB SSD Premium (SSD Premium LRS P15 – 1100 IOPS (3500 burst), 125 MB/s)
Debian Buster
Percona server 8.0.22-13

The results don't look bad.

Looking at the test configuration, it seems only compression was tested; there's nothing on how snapshots or other features affect performance. But at least the baseline looks decent?