The Improvements Behind Amazon S3 Becoming Strongly Consistent

I saw the Hacker News thread "Diving Deep on S3 Consistency (allthingsdistributed.com)" and figured it was time to write this up. The original article, "Diving Deep on S3 Consistency", is by Amazon CTO Werner Vogels, who spends some time describing how Amazon S3 went from eventually consistent to strongly consistent. When Amazon S3 made the original announcement I also wrote a post about it: "Amazon S3 is now Strong Read-After-Write Consistency...".

The reason Amazon S3 was eventually consistent in the first place is the cache design of its metadata subsystem:

Per-object metadata is stored within a discrete S3 subsystem. This system is on the data path for GET, PUT, and DELETE requests, and is responsible for handling LIST and HEAD requests. At the core of this system is a persistence tier that stores metadata. Our persistence tier uses a caching technology that is designed to be highly resilient. S3 requests should still succeed even if infrastructure supporting the cache becomes impaired. This meant that, on rare occasions, writes might flow through one part of cache infrastructure while reads end up querying another. This was the primary source of S3’s eventual consistency.

The most direct way to get rid of eventual consistency would be to remove the cache, but the performance impact would be too large, so the design had to keep the cache and instead rely on another mechanism to guarantee that the data in the cache is correct:

One early consideration for delivering strong consistency was to bypass our caching infrastructure and send requests directly to the persistence layer. But this wouldn’t meet our bar for no tradeoffs on performance. We needed to keep the cache. To keep values properly synchronized across cores, CPUs implement cache coherence protocols. And that’s what we needed here: a cache coherence protocol for our metadata caches that allowed strong consistency for all requests.

The next step was designing the logic that makes operations on each individual S3 object serializable:

We had introduced new replication logic into our persistence tier that acts as a building block for our at-least-once event notification delivery system and our Replication Time Control feature. This new replication logic allows us to reason about the “order of operations” per-object in S3. This is the core piece of our cache coherency protocol.
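Purely as an illustration of the general shape of such a protocol (the post does not publish the actual design, so the names and structure below are my own assumptions, not S3's implementation): a cached metadata entry can carry the per-object sequence number it was written at, and a read is only served from the cache if that matches the latest sequence number the replication logic reports for that object.

#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Hypothetical sketch only: the per-object "order of operations" exposed as a
// monotonically increasing sequence number, used to decide whether a cached
// metadata entry is still current.
struct CachedMetadata {
  std::string metadata;
  uint64_t sequence;  // sequence number this entry was cached at
};

class MetadataCache {
 public:
  // latest_sequence would come from the replication logic in the persistence
  // tier, i.e. the component that knows the order of operations per object.
  std::optional<std::string> Read(const std::string& object_key,
                                  uint64_t latest_sequence) {
    auto it = entries_.find(object_key);
    if (it == entries_.end() || it->second.sequence < latest_sequence)
      return std::nullopt;  // stale or missing: fall back to the persistence tier
    return it->second.metadata;
  }

  void Write(const std::string& object_key, const std::string& metadata,
             uint64_t sequence) {
    entries_[object_key] = {metadata, sequence};
  }

 private:
  std::unordered_map<std::string, CachedMetadata> entries_;
};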

After that they also had to make this cache coherence machinery highly available, and finally verify the correctness of the implementation, which took more effort than implementing the protocol itself:

These verification techniques were a lot of work. They were more work, in fact, than the actual implementation itself. But we put this rigor into the design and implementation of S3’s strong consistency because that is what our customers need.

Amazon S3 was one of AWS's flagship products at launch, and the paper behind it at the time, "Amazon's Dynamo", heavily influenced the whole industry afterwards (even though the paper uses Amazon's shopping cart as its example). This follow-up is essentially an update to the techniques in the original paper, showing that what used to be eventually consistent can be pushed all the way to strongly consistent.

Akamai Also Launched a Key-Value Service, EdgeKV

I haven't covered Akamai's architecture here before, so some context first: Akamai's serverless platform at the edge is EdgeWorkers, which runs JavaScript:

EdgeWorkers lets developers just code — integrating into existing CI/CD workflows and enabling multiple teams to work in parallel using JavaScript. EdgeWorkers eliminates the hassle of managing compute resources and building for scale.

What they launched this time is EdgeKV, currently still in beta: "Serverless Storage at the Edge (EdgeKV Beta)".

As the name suggests, it is a key-value store; architecturally it gives up the C in the CAP theorem and goes with eventual consistency instead:

EdgeKV uses what is known in distributing computing as an eventual consistency model to perform writes and updates. This model achieves high availability with low read latency by propagating data writes globally. The period of time it takes the system to distribute data globally is called the “inconsistency window”.

Next door, Cloudflare Workers KV is also eventually consistent (from "How KV works"):

KV achieves this performance by being eventually-consistent. Changes are immediately visible in the edge location at which they're made, but may take up to 60 seconds to propagate to all other edge locations.

Looks like this mostly fills in a gap to match the competitor's product line...

Cache Smearing, as Explained by Etsy

Etsy's engineering blog describes how they design their caching: "How Etsy caches: hashing, Ketama, and cache smearing".

Using consistent hashing is table stakes at this point; the article spends some space explaining why consistent hashing is used at all.

The second half covers the problem you still run into after adopting consistent hashing, namely how to handle hot keys: some data is so hot (accessed so often) that even with consistent hashing a single machine can get blown up.

They did several things. The first is the cache smearing mechanism, which appends a random suffix to the key of a single piece of data so that the resulting keys get spread across different machines:

Let’s take an example of a hot key popular_user_data. This key is read often (since the user is popular) and is hashed to pool member 3. Cache smearing appends a random number in a small range (say, [0, 8)) to the key before each read or write. For instance, successive reads might look up popular_user_data3, popular_user_data1, and popular_user_data6. Because the keys are different, they will be hashed to different hosts. One or more may still be on pool member 3, but not all of them will be, sharing the load among the pool.
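A minimal sketch of the idea in C++ (the pool size and the plain std::hash plus modulo below are just stand-ins for illustration, not Etsy's actual consistent-hashing code):

#include <cstdint>
#include <functional>
#include <iostream>
#include <random>
#include <string>

// Cache smearing: append a small random suffix (here in [0, 8)) to a hot key
// so that successive reads and writes map to several cache hosts instead of
// always hashing to the same one.
std::string SmearKey(const std::string& hot_key, int smear_range = 8) {
  static thread_local std::mt19937 rng{std::random_device{}()};
  std::uniform_int_distribution<int> dist(0, smear_range - 1);
  return hot_key + std::to_string(dist(rng));
}

int main() {
  const int pool_size = 16;  // pretend pool of 16 memcached hosts
  std::hash<std::string> h;  // stand-in for the real (consistent) hash
  for (int i = 0; i < 5; ++i) {
    std::string key = SmearKey("popular_user_data");
    std::cout << key << " -> host " << h(key) % pool_size << "\n";
  }
}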

The second thing is monitoring which keys are the hottest:

We’ve seen this problem many times over the years, using mctop, memkeys, and our distributed tracing tools to track down hot keys.

The third is maintaining a list of hot keys (not every key gets cache smearing):

We manually add cache smearing to only our hottest keys, leaving keys that are less read-heavy efficiently stored on a single host.

It's a workaround for when you're at a scale where a single hot key is more than a single server can handle...

Consistent Backup for MongoDB

Percona released a consistent backup tool for MongoDB on GitHub: "Percona-Lab/mongodb_consistent_backup: 1.0 Release Explained".

The code is at "Percona-Lab/mongodb_consistent_backup", and the project page also lists the requirements.

Percona is putting more and more resources into MongoDB... (though I still don't really feel like using it XD)

Amazon DynamoDB Accelerator (DAX)

A new piece of DynamoDB architecture that handles caching for you at the system level: "Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads".

DAX is compatible with the existing DynamoDB API:

DAX is a fully managed caching service that sits (logically) in front of your DynamoDB tables. It operates in write-through mode, and is API-compatible with DynamoDB.

Because of the cache, it is an eventually-consistent architecture:

Responses are returned from the cache in microseconds, making DAX a great fit for eventually-consistent read-intensive workloads.
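As a rough sketch of what a write-through cache in front of a table looks like (this only illustrates the general pattern; the class and function names are made up and this is not how the DAX client is actually implemented):

#include <optional>
#include <string>
#include <unordered_map>

// Illustrative write-through cache: writes go to the backing table and the
// cached copy is refreshed on the same path; reads are served from the cache
// when possible and fall through to the table on a miss.
struct BackingTable {
  std::unordered_map<std::string, std::string> rows;
  void Put(const std::string& k, const std::string& v) { rows[k] = v; }
  std::optional<std::string> Get(const std::string& k) const {
    auto it = rows.find(k);
    if (it == rows.end()) return std::nullopt;
    return it->second;
  }
};

class WriteThroughCache {
 public:
  explicit WriteThroughCache(BackingTable& table) : table_(table) {}

  void Put(const std::string& k, const std::string& v) {
    table_.Put(k, v);   // write-through: persist first...
    cache_[k] = v;      // ...then refresh the cached copy
  }

  std::optional<std::string> Get(const std::string& k) {
    auto it = cache_.find(k);
    if (it != cache_.end()) return it->second;  // fast path from memory
    auto v = table_.Get(k);                     // cache miss: read through
    if (v) cache_[k] = *v;
    return v;
  }

 private:
  BackingTable& table_;
  std::unordered_map<std::string, std::string> cache_;
};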

It is built on r3-family instances, with a limit of ten nodes (which leaves me with a big question mark):

Each DAX cluster can contain 1 to 10 nodes; you can add nodes in order to increase overall read throughput. The cache size (also known as the working set) is based on the node size (dax.r3.large to dax.r3.8xlarge) that you choose when you create the cluster. Clusters run within a VPC, with nodes spread across Availability Zones.

I'm not really clear on the benefit (compared with running memcached or a similar cache setup yourself); maybe it will click after a few more days of thinking about it... :o

CCFS on Top of ext4

Something I saw in "Application crash consistency and performance with CCFS".

CCFS aims to raise ext4's data integrity while still delivering high performance:

CCFS (the Crash-Consistent File System) is an extension to ext4 that restores ordering and weak atomicity guarantees for applications, while at the same time delivering much improved performance.
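To make the "ordering guarantees for applications" part concrete, here is the kind of ordering assumption applications make today, sketched in C++ with made-up file names (error handling is minimal, and whether any particular fsync() can actually be dropped depends on the guarantees of the file system in use):

#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// The program *intends* the log record to reach disk before the pointer file
// is updated. On ext4 with the default data=ordered, that ordering is only
// guaranteed across a crash if we pay for the fsync() in between; preserving
// this kind of program order without the explicit flush is what the CCFS
// ordering guarantee is about.
void AppendThenUpdatePointer(const char* record, const char* pointer) {
  int log_fd = open("app.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
  if (log_fd < 0 || write(log_fd, record, strlen(record)) < 0) return;
  fsync(log_fd);   // force the record down before the next step
  close(log_fd);

  int ptr_fd = open("current.ptr", O_WRONLY | O_TRUNC | O_CREAT, 0644);
  if (ptr_fd < 0 || write(ptr_fd, pointer, strlen(pointer)) < 0) return;
  fsync(ptr_fd);
  close(ptr_fd);
}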

If you need absolute data integrity you have to mount with data=journal so that data can be replayed after a system crash; the default data=ordered cannot give you that. CCFS doesn't aim for absolute data integrity either, but tries to get as close as it can. In the tests you can see CCFS improves data integrity dramatically.

And performance even improved (wait, really).

That's pretty amazing...

I looked around and it doesn't seem to be open sourced (at least I can't find it right now); let's wait and see whether someone implements it...

Jump Consistent Hash, Proposed by Google

An article that has been sitting in my queue for a while...

When you have n cache servers to work with, the most traditional approach is to use hash(key) % n to decide which cache server to pick, but with this method adding or removing a machine invalidates the cache on a massive scale. Consistent hashing is about avoiding that large-scale cache invalidation when cache servers are added or removed.
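A quick way to see the problem (a small simulation of my own, not from the paper): count how many keys change servers when the pool grows from 10 to 11 with plain modulo.

#include <cstddef>
#include <functional>
#include <iostream>
#include <string>

// With hash(key) % n, how many of 100000 keys move when the pool grows from
// 10 to 11 servers? Roughly n/(n+1) of them, i.e. about 91% here.
int main() {
  std::hash<std::string> h;
  const int keys = 100000;
  int moved = 0;
  for (int i = 0; i < keys; ++i) {
    std::size_t v = h("key:" + std::to_string(i));
    if (v % 10 != v % 11) ++moved;
  }
  std::cout << moved << " of " << keys << " keys moved\n";
}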

The more basic consistent hashing approach maps hash(key) onto a ring (assuming the hash value range is 32 bits, i.e. 0 to 0xFFFFFFFF, spread evenly around the ring). For example, starting with only three machines, you get three buckets:

(Image taken from "Consistent hashing".)

Once a machine is added, it takes over part of the original hash value range:

(Image taken from "Consistent hashing".)

This approach usually needs O(log N) time (for queries and for adding/removing servers) and O(n) space.

There is also a uniformity problem to solve; the usual workaround is to give each server many virtual buckets on the ring, like this:

(Image taken from "The Simple Magic of Consistent Hashing".)
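A minimal sketch of such a ring with virtual buckets, using std::map as the sorted ring (the number of virtual buckets and the use of std::hash are arbitrary choices for illustration; real deployments typically use MD5/Ketama-style hashing):

#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Ring-based consistent hashing with virtual buckets: each server is hashed
// onto the ring many times; a key is served by the first virtual bucket at or
// after its hash position, wrapping around the ring.
class HashRing {
 public:
  void AddServer(const std::string& server, int virtual_buckets = 100) {
    for (int i = 0; i < virtual_buckets; ++i)
      ring_[Hash(server + "#" + std::to_string(i))] = server;
  }

  void RemoveServer(const std::string& server, int virtual_buckets = 100) {
    for (int i = 0; i < virtual_buckets; ++i)
      ring_.erase(Hash(server + "#" + std::to_string(i)));
  }

  // Assumes at least one server has been added.
  const std::string& ServerFor(const std::string& key) const {
    auto it = ring_.lower_bound(Hash(key));
    if (it == ring_.end()) it = ring_.begin();  // wrap around the ring
    return it->second;
  }

 private:
  static uint32_t Hash(const std::string& s) {
    return static_cast<uint32_t>(std::hash<std::string>{}(s));
  }
  std::map<uint32_t, std::string> ring_;  // ring position -> server
};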

Jump Consistent Hash, published by Google in 2014, gives a better answer: O(1) space, O(log N) time (query only), and a very even distribution: "A Fast, Minimal Memory, Consistent Hash Algorithm".

The paper provides a C++ implementation:

int32_t JumpConsistentHash(uint64_t key, int32_t num_buckets) {
  int64_t b = -1, j = 0;
  while (j < num_buckets) {
    b = j;
    key = key * 2862933555777941757ULL + 1;
    j = (b + 1) * (double(1LL << 31) / double((key >> 33) + 1));
  }
  return b;
}

Magic numbers (and even a double) plus a mysterious loop... The Performance Analysis section explains that the time complexity is determined by the loop, and the analysis works out to O(log N):

The time complexity of the algorithm is determined by the number of iterations of the while loop. [...] So the expected number of iterations is less than ln(n) + 1.
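To see the "consistent" part in action, here is a small usage sketch of my own (not from the paper): when the bucket count goes from 10 to 11, keys either stay put or move into the new bucket, never between old buckets.

#include <cstdint>
#include <iostream>

int32_t JumpConsistentHash(uint64_t key, int32_t num_buckets);  // as above

int main() {
  int moved = 0, moved_to_old_bucket = 0;
  for (uint64_t key = 0; key < 100000; ++key) {
    int32_t before = JumpConsistentHash(key, 10);
    int32_t after = JumpConsistentHash(key, 11);
    if (before != after) {
      ++moved;
      if (after != 10) ++moved_to_old_bucket;  // should stay 0
    }
  }
  // Expect roughly 1/11 of the keys to move, and none to an old bucket.
  std::cout << moved << " keys moved, " << moved_to_old_bucket
            << " moved to an old bucket\n";
}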

Next is the uniformity question that always comes up with consistent hashing; you can see it is in a completely different league from the older approaches.

The execution speed is also very good; note that the X axis of the benchmark chart is a log scale.

The Note section also states that Google has no plans to patent the algorithm (and it has been published openly):

Google has not applied for patent protection for this algorithm, and, as of this writing, has no plans to. Rather, it wishes to contribute this algorithm to the community.

EMR's Mitigation for S3 Consistency

Back in January this year, Netflix wrote about the problems caused by S3's eventual consistency: "Netflix's fix for S3's eventual consistency...". At the time Netflix's approach was to implement s3mper to ensure consistency.

Half a year later, the AWS folks implemented a similar feature in EMR: "Consistent View for Elastic MapReduce's File System".

From the article's description, it looks like DynamoDB is used to track the state of the data on S3, and the DynamoDB data is not cleaned up automatically, so keep that in mind when using it :o

The Fiddly Details of PHP's Memcached...

Some notes on using Memcached from PHP, written down mostly for my own reference; they may not suit everyone.

Configuration:

  • You should normally turn on the libketama-related settings, including OPT_DISTRIBUTION and OPT_LIBKETAMA_COMPATIBLE (a rough libmemcached equivalent is sketched right after this list).
  • With multiple servers, pay attention to whether you connect by hostname or by IP address (especially when several programming languages are involved); it changes the result of the consistent hashing. To avoid hostname-related surprises, put the server list in a JSON file shared with the other languages.
  • Use SERIALIZER_JSON, again for compatibility with other languages.
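For reference, the corresponding setup on the libmemcached (C/C++) side looks roughly like this. The hostnames are placeholders, and the mapping between the PHP option and the libmemcached behavior is my reading of the PHP documentation, so double-check it against your versions:

#include <libmemcached/memcached.h>

// Rough libmemcached equivalent of enabling ketama-compatible consistent
// hashing, for cross-language compatibility with PHP's
// Memcached::OPT_LIBKETAMA_COMPATIBLE. Hostnames below are placeholders.
int main() {
  memcached_st *memc = memcached_create(NULL);

  // Weighted ketama distribution (MD5 key hashing plus consistent
  // distribution), which is what the PHP option is documented to enable.
  memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_KETAMA_WEIGHTED, 1);

  // The server list (hostname vs. IP matters for the ketama ring!) should
  // match the JSON file shared with the PHP side.
  memcached_server_add(memc, "cache1.example.com", 11211);
  memcached_server_add(memc, "cache2.example.com", 11211);

  memcached_free(memc);
  return 0;
}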

Usage:

  • add() fails when the key already exists, while set() simply overwrites it.
  • The expiration parameter of add() and set() is a UNIX timestamp, not the intuitive number of seconds...
  • Don't use the callback form of get(), because it gives you no way to set an expire time.
  • The memcached manual says the default convention is to use a colon (:) as the key separator, which helps when working with statistics.

That's it for now...