Mountpoint for Amazon S3 is now generally available (GA)...

Back in March I mentioned that AWS had built its own FUSE implementation for Amazon S3 (see "AWS released its own official Amazon S3 FUSE package" below), and now it has gone GA: "Mountpoint for Amazon S3 – Generally Available and Ready for Production Workloads".

It looks like s3fs-fuse is still being actively maintained, but poking around I haven't found a comparison of the two... (maybe because Mountpoint for Amazon S3 was still an alpha release until now?)

As I recall, this sort of mount is quite handy for dumping files into S3...
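As a minimal sketch of that workflow (assuming the mount-s3 binary from Mountpoint is installed, AWS credentials are configured, and the bucket and mount path are hypothetical), mounting a bucket and writing through it looks roughly like this:

```python
import pathlib
import subprocess

# Hypothetical names; assumes Mountpoint for Amazon S3 ("mount-s3") is
# installed and AWS credentials are already configured.
bucket = "my-bucket"
mountpoint = pathlib.Path("/mnt/s3")
mountpoint.mkdir(parents=True, exist_ok=True)

# Mount the bucket, then write through it like a regular filesystem.
subprocess.run(["mount-s3", bucket, str(mountpoint)], check=True)
(mountpoint / "hello.txt").write_text("stored through FUSE\n")

# Unmount when done.
subprocess.run(["umount", str(mountpoint)], check=True)
```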

New numbers for Amazon S3

Werner Vogels wrote a retrospective on Amazon S3: "Building and operating a pretty big storage system called S3". It includes the kind of numbers that someone at his level can more easily get cleared for public release:

They're annotated "S3 by the numbers (as of publishing this post).", so these are the figures as of now, July 2023.

It still clearly avoids the total storage size, but it does give the current object count, 280 trillion, and the request rate, 100 million per second.

Put next to the previously published numbers (from the "Amazon S3" article on Wikipedia), the last announcement, in March 2021, was of passing 100 trillion objects, so in roughly two years it has grown to 280 trillion:

Amazon Web Services introduced Amazon S3 in 2006. Amazon reported it stored more than 100 trillion objects as of March 2021, up from 10 billion objects in October 2007, 14 billion objects in January 2008, 29 billion objects in October 2008, 52 billion objects in March 2009, 64 billion objects in August 2009, 102 billion objects in March 2010, and 2 trillion objects in April 2013.

AWS released its own official Amazon S3 FUSE package

I came across the "Mountpoint for Amazon S3" project: AWS has released its own Amazon S3 FUSE package. There is some discussion on Hacker News as well: "Mountpoint – file client for S3 written in Rust, from AWS (github.com/awslabs)".

Amazon S3's pricing is quite a bit lower than the other storage AWS offers. In us-east-1 (US East), S3 is $0.023/GB/month, while EBS (gp3) is $0.08/GB/month, and even EBS (st1) is $0.045/GB/month.
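To put those per-GB prices in concrete terms, here is the monthly storage cost at, say, 500 GB (a figure picked purely for illustration; request and IOPS charges are excluded):

```python
# Monthly cost of storing 500 GB at the us-east-1 per-GB prices quoted above.
# Request (S3) and IOPS/throughput (EBS) charges are not included.
prices_per_gb_month = {"S3 Standard": 0.023, "EBS gp3": 0.08, "EBS st1": 0.045}
for name, price in prices_per_gb_month.items():
    print(f"{name}: ${500 * price:,.2f}/month")
# S3 Standard: $11.50/month
# EBS gp3: $40.00/month
# EBS st1: $22.50/month
```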

Compared to EBS, S3 adds per-API-call charges, so for applications that don't generate lots of API calls (ones that tend to write big chunks of data to a file, for example), accessing Amazon S3 through FUSE lets existing packaged software or programs run on top of it directly.

Another common use case is letting packaged software or existing programs read data that lives in S3.

The project that used to come to mind immediately for this is s3fs-fuse; it has been around for a long time, and everyone knows multi-writer scenarios are its pain point.

What AWS is doing here is somewhat redundant: the things it wants to do have mostly been solved by s3fs-fuse already. For now the only selling point seems to be that it is Rust-based, while s3fs-fuse is mainly C++, which honestly doesn't make that much of a difference:

Mountpoint for Amazon S3 is optimized for read-heavy workloads that need high throughput. It intentionally does not implement the full POSIX specification for file systems.

The project is still an alpha release, so it's not clear where it is actually headed...

AWS DataSync now supports data in GCP and Azure storage

AWS DataSync announced support for storage on GCP and Azure: "New for AWS DataSync – Move Data Between AWS and Other Public Locations". Oddly, the post's URL names both companies' products, but the title doesn't mention them...

The point this time is support for the object storage products of Google Cloud and Microsoft Azure:

Today, we added to DataSync the capability to migrate data between AWS Storage services and either Google Cloud Storage or Microsoft Azure Files.

Previously everyone spun up their own machines and moved the data by hand; now you can simply pay (billed per GB) to have the service do it. Note that egress bandwidth still costs extra, though...

Amazon S3 now supports checksum algorithms other than MD5

Amazon S3 announced support for checksum algorithms beyond MD5: "New – Additional Checksum Algorithms for Amazon S3".

It adds support for SHA-1, SHA-256, CRC-32, and CRC-32C:

In particular, you can specify the use of any one of four widely used checksum algorithms (SHA-1, SHA-256, CRC-32, and CRC-32C) when you upload each of your objects to S3.

Although cryptographic hash functions are among the options, they're really being used as checksum algorithms here, for verifying file integrity rather than guarding against tampering in transit (that part is handled by HTTPS). The previously supported MD5 should have been good enough for that, but now there are many more choices.
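As a minimal boto3 sketch (bucket and key names here are hypothetical), you request one of the new algorithms at upload time and S3 computes, stores, and returns the checksum:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to compute and store an additional checksum for this object.
# "my-bucket" and "backup.tar.gz" are hypothetical names.
with open("backup.tar.gz", "rb") as f:
    resp = s3.put_object(
        Bucket="my-bucket",
        Key="backup.tar.gz",
        Body=f,
        ChecksumAlgorithm="SHA256",  # also: "SHA1", "CRC32", "CRC32C"
    )

# S3 returns the checksum it computed (base64-encoded), so the client can
# compare it against a locally computed digest to verify the upload.
print(resp["ChecksumSHA256"])
```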

And it's already available in all commercial regions:

The four additional checksums are now available in all commercial AWS Regions and you can start using them today at no extra charge.

A tool for creating S3 buckets and IAM accounts

I came across Simon Willison's "s3-credentials: a tool for creating credentials for S3 buckets", which covers a few things.

A reasonably good security design on AWS is for each project to have its own S3 bucket, with a corresponding IAM user that can only access that one bucket.

But setting this up is a hassle:

Creating those credentials is surprisingly difficult!

The whole setup involves four steps:

  1. Create the S3 bucket.
  2. Create the IAM user.
  3. Attach the matching S3 permissions to the IAM user.
  4. Create the IAM user's access key.

The annoying one is mainly step 3: that JSON policy format has to be looked up every single time XD
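Concretely, a minimal boto3 sketch of all four steps might look like this (bucket, user, and policy names are hypothetical; the step-3 JSON is the part that always needs looking up):

```python
import json
import boto3

bucket = "my-project-bucket"  # hypothetical names
user = "my-project-user"

s3 = boto3.client("s3")
iam = boto3.client("iam")

# 1. Create the bucket (us-east-1 needs no LocationConstraint).
s3.create_bucket(Bucket=bucket)

# 2. Create the IAM user.
iam.create_user(UserName=user)

# 3. The annoying part: the inline policy scoping the user to this one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}
iam.put_user_policy(
    UserName=user, PolicyName=f"{user}-s3", PolicyDocument=json.dumps(policy)
)

# 4. Create the access key pair for the user.
key = iam.create_access_key(UserName=user)["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```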

So he wrote the s3-credentials package for everyone; it can be installed directly with pip.

I personally prefer plain awscli, keeping the commands on a wiki page ("awscli") and copy & pasting them whenever needed. The company wiki also has ready-made commands for launching EC2 instances straight into the right subnet...

This stuff probably won't get simpler, so everyone has to find their own way XD

A side note about Cloudflare R2 Storage...

I saw "Cloudflare's Disruption (stratechery.com)" on the Hacker News front page. The article itself, "Cloudflare's Disruption", is just okay, mainly analyzing the strategy behind Cloudflare R2 Storage; what actually made me want to write something is the Hacker News discussion...

First, the S3 -> R2 -> Q1 -> P0 progression:

ksec:

> The service will be called R2 — "one less than S3," quipped Cloudflare CEO Matthew Prince in an interview with Protocol ahead of Cloudflare's announcement

Oh I never thought of that. So the next one is Q1 and final one would be P0.

And a bit below that, someone brings up IBM and HAL:

piaste:

And it is likely inspired by the old joke that 2001: A Space Odyssey's HAL was one less than "IBM".

The next Q1 is next year; let's see whether 2022Q1 brings a P0 issue XDDD

Cloudflare launches Cloudflare R2 Storage: S3 API compatible, but with no egress fees

Cloudflare announced Cloudflare R2 Storage, compatible with the S3 API but with no egress fees: "Announcing Cloudflare R2 Storage: Rapid and Reliable Object Storage, minus the egress fees". The Hacker News thread "Cloudflare R2 storage: Rapid and reliable object storage, minus the egress fees (cloudflare.com)" is worth a read; the PM responsible for R2 (posting as greg-m) answers questions there.

R2's first distinguishing feature is the egress fee just mentioned: on most clouds, data transfer in is free but transfer out is expensive, and one of R2's main selling points is that transfer out costs nothing:

R2 builds on Cloudflare’s commitment to the Bandwidth Alliance, providing zero-cost egress for stored objects — no matter your request rate. Egress bandwidth is often the largest charge for developers utilizing object storage and is also the hardest charge to predict. Eliminating it is a huge win for open-access to data stored in the cloud.

Storage cost is also on the low side: S3 currently charges US$0.023/GB/month (comparing against us-east-1), while R2 is priced at US$0.015/GB/month:

That doesn’t mean we are shifting bandwidth costs elsewhere. Cloudflare R2 will be priced at $0.015 per GB of data stored per month — significantly cheaper than major incumbent providers.

On durability, it matches S3's eleven 9s per year:

The core of what makes Object Storage great is reliability — we designed R2 for data durability and resilience at its core. R2 will provide 99.999999999% (eleven 9’s) of annual durability, which describes the likelihood of data loss.

It isn't publicly available yet; this is more of a pre-announcement aimed at the market:

R2 is currently under development — you can sign up here to join the waitlist for access.

A few points are quite interesting. First, within the Bandwidth Alliance that Cloudflare itself promotes, many VPS providers don't charge for traffic between themselves and Cloudflare, so effectively both VPS-to-R2 and R2-to-VPS transfers are free; note, though, that those VPS providers all push their own object storage too.

Vultr's US$5 plan, for instance, includes 250GB of space and 1TB of bandwidth; setting the bandwidth aside (it can be routed through Cloudflare), that works out to US$0.02/GB.

Linode is similar: its US$5 plan includes 250GB of space and 500GB of bandwidth, which also comes out to US$0.02/GB.

Backblaze has a comparable product, B2, at US$0.005/GB/month for storage plus $0.01/GB for transfer, and its bandwidth can likewise be routed through Cloudflare.
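Laid out side by side, the per-GB storage math above (with transfer set aside, since it can go through Cloudflare) looks like this:

```python
# Effective storage price per GB per month, using the numbers quoted above.
# Vultr/Linode are flat plans, so the figure is plan price / included space.
offers = {
    "Amazon S3 (us-east-1)": 0.023,
    "Cloudflare R2": 0.015,
    "Vultr ($5 / 250GB)": 5 / 250,
    "Linode ($5 / 250GB)": 5 / 250,
    "Backblaze B2": 0.005,
}
for name, per_gb in offers.items():
    print(f"{name}: ${per_gb:.3f}/GB/month")
# Amazon S3 (us-east-1): $0.023/GB/month
# Cloudflare R2: $0.015/GB/month
# Vultr ($5 / 250GB): $0.020/GB/month
# Linode ($5 / 250GB): $0.020/GB/month
# Backblaze B2: $0.005/GB/month
```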

Let's see how things look once the product actually launches, but it seems quite interesting. For the current cloud providers the impact should be limited (moving data into R2 still means paying their egress fees), but for these VPS providers it's likely a real hit...

The changes behind Amazon S3 becoming strongly consistent

The Hacker News discussion "Diving Deep on S3 Consistency (allthingsdistributed.com)" reminded me to write this up. In the original post, "Diving Deep on S3 Consistency", Amazon CTO Werner Vogels spends some pages describing how Amazon S3 went from eventually consistent to strongly consistent. I also wrote a post back when the change was announced: "Amazon S3 現在變成 Strong Read-After-Write Consistency 啦...".

Amazon S3 was eventually consistent because of the cache design in its metadata subsystem:

Per-object metadata is stored within a discrete S3 subsystem. This system is on the data path for GET, PUT, and DELETE requests, and is responsible for handling LIST and HEAD requests. At the core of this system is a persistence tier that stores metadata. Our persistence tier uses a caching technology that is designed to be highly resilient. S3 requests should still succeed even if infrastructure supporting the cache becomes impaired. This meant that, on rare occasions, writes might flow through one part of cache infrastructure while reads end up querying another. This was the primary source of S3’s eventual consistency.

The most direct way to get rid of eventual consistency would be to remove the cache, but the performance hit would be too large, so the design had to keep the cache and instead use a separate channel to guarantee that the data in the cache is correct:

One early consideration for delivering strong consistency was to bypass our caching infrastructure and send requests directly to the persistence layer. But this wouldn’t meet our bar for no tradeoffs on performance. We needed to keep the cache. To keep values properly synchronized across cores, CPUs implement cache coherence protocols. And that’s what we needed here: a cache coherence protocol for our metadata caches that allowed strong consistency for all requests.

The next step was designing the logic that guarantees serializability for the operations on each S3 object:

We had introduced new replication logic into our persistence tier that acts as a building block for our at-least-once event notification delivery system and our Replication Time Control feature. This new replication logic allows us to reason about the “order of operations” per-object in S3. This is the core piece of our cache coherency protocol.
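To make the idea concrete, here is a toy sketch (my own illustration, not Amazon's actual design) of how per-object sequence numbers can make a cache coherent: the replication logic tells the cache the newest sequence number for each key, and the cache serves an entry only if it is provably not older than that, otherwise falling back to the persistence tier:

```python
class PersistenceTier:
    """Toy stand-in for the metadata persistence tier: the source of truth,
    assigning a monotonically increasing sequence number per object."""

    def __init__(self):
        self._data = {}  # key -> (seq, value)

    def write(self, key, value):
        seq = self._data.get(key, (0, None))[0] + 1
        self._data[key] = (seq, value)
        return seq  # the "order of operations" for this object

    def read(self, key):
        return self._data.get(key, (0, None))


class CoherentCache:
    """Toy cache that only serves an entry if it is provably not stale."""

    def __init__(self, tier):
        self._tier = tier
        self._cache = {}   # key -> (seq, value)
        self._latest = {}  # key -> newest seq, pushed in by replication logic

    def on_write(self, key, seq):
        # Coherence signal: "this key is now at sequence number seq".
        self._latest[key] = seq

    def get(self, key):
        seq, value = self._cache.get(key, (0, None))
        if seq < self._latest.get(key, 0):
            # Cache entry is older than the newest known write: refresh.
            seq, value = self._tier.read(key)
            self._cache[key] = (seq, value)
        return value


tier = PersistenceTier()
cache = CoherentCache(tier)
cache.on_write("obj", tier.write("obj", "v1"))
assert cache.get("obj") == "v1"  # read-after-write sees the new value
```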

After that they also had to make this cache coherence machinery highly available, and finally verify that the implementation is correct, which took more effort than implementing the protocol itself:

These verification techniques were a lot of work. They were more work, in fact, than the actual implementation itself. But we put this rigor into the design and implementation of S3’s strong consistency because that is what our customers need.

Amazon S3 was one of AWS's original flagship products, and the paper behind it at the time, "Amazon's Dynamo", profoundly influenced the whole industry (even though the paper uses Amazon's shopping cart as its running example). This follow-up effectively updates the techniques of the original paper, showing that what started out eventually consistent can be pushed all the way to strongly consistent.

Amazon EC2 adds direct cross-partition copying of AMIs (images)

Amazon EC2 AMIs can now be copied across partitions: "Amazon EC2 now allows you to copy Amazon Machine Images across AWS GovCloud, AWS China and other AWS Regions".

As the announcement notes, before this feature, getting an identical image over there meant rebuilding the AMI in each partition:

Previously, to copy AMIs across these AWS regions, you had to rebuild the AMI in each of them. These partitions enabled data isolation but often made this copy process complex, time-consuming and expensive.

There are some restrictions: the image must be 1TB or smaller, and it has to be staged in S3, but these should be acceptable:

This feature provides a packaged format that allows AMIs of size 1TB or less to be stored in AWS Simple Storage Service (S3) and later moved to any other region.

For now it's only available through the CLI, or by calling the API directly via the SDK; the web console doesn't seem to offer it yet:

This functionality is available through the AWS Command Line Interface (AWS CLI) and the AWS Software Development Kit (AWS SDK). To learn more about copying AMIs across these partitions, please refer to the documentation.
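The packaged-into-S3 flow described in the quote maps onto the store/restore image task calls in the SDK; a minimal boto3 sketch might look like the following (the AMI ID, bucket names, and object key are hypothetical, and each partition needs its own credentials, which I'm glossing over):

```python
import boto3

# Hypothetical IDs/names. Cross-partition credentials and copying the S3
# object between partitions are handled separately and omitted here.
src = boto3.client("ec2", region_name="us-east-1")
dst = boto3.client("ec2", region_name="us-gov-west-1")

# Package the AMI into an S3 object in the source partition...
src.create_store_image_task(
    ImageId="ami-0123456789abcdef0", Bucket="src-ami-bucket"
)

# ...and, once the object has been copied to a bucket reachable from the
# destination partition, unpack it back into an AMI over there.
resp = dst.create_restore_image_task(
    Bucket="dst-ami-bucket",
    ObjectKey="ami-0123456789abcdef0.bin",  # hypothetical stored-object key
    Name="my-restored-ami",
)
print(resp["ImageId"])
```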