Cloudflare launches Cache Reserve, letting you buy cache space

Over the past few days Cloudflare announced a big batch of things, one of which is Cache Reserve: 「Introducing Cache Reserve: massively extending Cloudflare’s cache」.

Normally, an LRU algorithm decides what gets evicted when Cloudflare's cache is full:

We do eviction based on an algorithm called “least recently used” or LRU. This means that the least-requested content can be evicted from cache first to make space for more popular content when storage space is full.
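
For illustration only (this is not Cloudflare's implementation), the LRU policy described above can be sketched in a few lines of Python: the least recently requested entry is the first one evicted when the cache is full.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: evict the least recently used entry when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # least recently used entry sits at the front

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry
```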

Cache Reserve is essentially buying your own cache space; the way it works is that you pay for the R2 storage:

Cache Reserve is a large, persistent data store that is implemented on top of R2.

With that, content can be kept for the full duration given in HTTP headers like Cache-Control, and you no longer have to worry about it being evicted. For pricing, first there is the R2 part:

The Cache Reserve Plan will mimic the low cost of R2. Storage will be $0.015 per GB per month and operations will be $0.36 per million reads, and $4.50 per million writes.
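
As a rough back-of-the-envelope using the quoted unit prices (the workload numbers below are made up purely for illustration):

```python
# Unit prices from the announcement; the workload figures are hypothetical.
STORAGE_PER_GB_MONTH = 0.015   # USD per GB stored per month
READS_PER_MILLION = 0.36       # USD per million read operations
WRITES_PER_MILLION = 4.50      # USD per million write operations

stored_gb = 1_000              # assume 1 TB kept in Cache Reserve
reads = 50_000_000             # assume 50M reads per month
writes = 5_000_000             # assume 5M writes per month

cost = (stored_gb * STORAGE_PER_GB_MONTH
        + reads / 1_000_000 * READS_PER_MILLION
        + writes / 1_000_000 * WRITES_PER_MILLION)
print(f"~${cost:.2f}/month")   # 15.00 + 18.00 + 22.50 = ~$55.50/month
```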

Then there is the Cache Reserve part of the pricing, which has not been announced yet:

(Cache Reserve pricing page will be out soon)

For users who want to squeeze out every last bit of hit rate, it's an option. Another thought: live-streaming protocols (like HLS) seem like they could use this to reduce the load on the origin server?

GitHub repositions how it uses Redis...

GitHub Engineering explained why they changed the way they use Redis: 「Moving persistent data out of Redis」.

Inside GitHub, Redis is used in two different ways. One is called transient Redis and is used purely as a cache:

We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.

The other has persistence enabled and is called persistent Redis:

We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.
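
For context, the two modes map roughly onto different Redis configurations. A sketch of what that might look like with standard Redis directives via redis-py (this is not GitHub's actual configuration, and the hostnames are made up):

```python
import redis

def configure_transient(r: redis.Redis) -> None:
    """Cache-only mode: bounded memory, LRU eviction, no durability."""
    r.config_set("maxmemory", "4gb")
    r.config_set("maxmemory-policy", "allkeys-lru")  # evict least recently used keys
    r.config_set("appendonly", "no")

def configure_persistent(r: redis.Redis) -> None:
    """Durable mode: append-only file enabled, never evict keys."""
    r.config_set("appendonly", "yes")
    r.config_set("maxmemory-policy", "noeviction")

# Two separate Redis instances (hostnames are placeholders for this sketch).
configure_transient(redis.Redis(host="transient-redis"))
configure_persistent(redis.Redis(host="persistent-redis"))
```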

The post is about persistent Redis being replaced with MySQL (InnoDB) storage:

Recently we made the decision to disable persistence in Redis and stop using it as a source of truth for our data. The main motivations behind this choice were to:

  • Reduce the operational cost of our persistence infrastructure by removing some of its complexity.
  • Take advantage of our expertise operating MySQL.
  • Gain some extra performance, by eliminating the I/O latency during the process of writing big changes on the server state to disk.

For the majority of callsites, we replaced persistent Redis with GitHub::KV, a MySQL key/value store of our own built atop InnoDB, with features like key expiration. We were able to use GitHub::KV almost identically as we used Redis: from trending repositories and users for the explore page, to rate limiting to spammy user detection.
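
GitHub::KV itself is open source, but to give a rough idea of what a key/value store with expiration on top of InnoDB can look like, here is a minimal sketch assuming a PEP 249 MySQL driver such as PyMySQL; the table name, schema, and helper functions are illustrative assumptions, not GitHub::KV's actual implementation:

```python
import datetime

# Illustrative schema: a single InnoDB table holding keys, values, and an
# optional expiration timestamp.
SCHEMA = """
CREATE TABLE IF NOT EXISTS key_values (
    `key`        VARBINARY(255) NOT NULL PRIMARY KEY,
    `value`      MEDIUMBLOB     NOT NULL,
    `expires_at` DATETIME       NULL
) ENGINE=InnoDB
"""

def kv_set(conn, key, value, ttl_seconds=None):
    """Upsert a key, optionally with a time-to-live."""
    expires_at = None
    if ttl_seconds is not None:
        expires_at = datetime.datetime.utcnow() + datetime.timedelta(seconds=ttl_seconds)
    cur = conn.cursor()
    cur.execute(
        "REPLACE INTO key_values (`key`, `value`, `expires_at`) VALUES (%s, %s, %s)",
        (key, value, expires_at),
    )
    cur.close()
    conn.commit()

def kv_get(conn, key):
    """Return the value if present and not expired, else None."""
    cur = conn.cursor()
    cur.execute(
        "SELECT `value` FROM key_values "
        "WHERE `key` = %s AND (`expires_at` IS NULL OR `expires_at` > UTC_TIMESTAMP())",
        (key,),
    )
    row = cur.fetchone()
    cur.close()
    return row[0] if row else None
```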

The rest of the post covers a fair amount of the migration process (including rewriting some features), but it doesn't explain very clearly why they didn't keep using Redis.

Going only by the three points they mention: maybe the I/O cost of persistence was too high, with little performance left to squeeze out? Whereas InnoDB has already had a huge amount of work put into it, so using it directly solves the problem?

Still, the migration clearly took quite a bit of effort: some applications used Redis in patterns that couldn't be moved directly onto InnoDB and had to be rewritten...

A Facebook disaster caused by its connection pool selection mechanism plus the complexity of the system...

Facebook's engineers wrote an article explaining a bug that took them more than two years to find: 「Solving the Mystery of Link Imbalance: A Metastable Failure State at Scale」.

The whole story reads like debugging by divination...

Facebook's underlying network uses Link Aggregation: multiple links are channel-bonded together up to the backbone. But they found that sometimes a single link would get stuck in congestion and cause system failures.

So they chased it all the way down, starting by suspecting the switches themselves, and finally assembled a cross-team group to dive into the analysis (more divination). Only then did they observe that the algorithm used by the connection pool mechanism, in a system as complex as Facebook's, was what caused the disaster...

When a query burst happens, Facebook's systems hit 50~100 databases at the same time to pull data and fill the cache. The connection pool uses an MRU (Most Recently Used) policy, and because the connections going over the congested link are the last to finish, they end up at the top of the pool and get reused first, so the link gets more and more congested...

Once they knew the problem, the fix was much simpler: just switching the connection selection algorithm from MRU to LRU solved it. But it took more than two years, and the effort of at least 30 people, to find and fix the problem.
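
To make the MRU/LRU difference concrete, here is a toy sketch (not Facebook's actual pool): with MRU the idle pool behaves like a stack, so the connection checked in last, the one that went over the congested link and finished last, is handed out first; with LRU it behaves like a queue and that connection goes to the back of the line.

```python
from collections import deque

class ConnectionPool:
    """Toy pool illustrating MRU vs LRU checkout policies (not Facebook's code)."""

    def __init__(self, connections, policy="MRU"):
        self.policy = policy
        self.idle = deque(connections)

    def checkout(self):
        # MRU: take the most recently returned connection (stack / LIFO).
        # LRU: take the least recently returned connection (queue / FIFO).
        return self.idle.pop() if self.policy == "MRU" else self.idle.popleft()

    def checkin(self, conn):
        self.idle.append(conn)  # returned connections land on "top" of the pool

# During a cache-fill burst every connection is checked out; the one over the
# congested link finishes last, so it is checked in last.
pool = ConnectionPool(["conn-over-fast-link", "conn-over-congested-link"], policy="MRU")
pool.checkout(); pool.checkout()          # burst: both connections are in use
pool.checkin("conn-over-fast-link")       # fast link finishes first
pool.checkin("conn-over-congested-link")  # congested link finishes last
print(pool.checkout())                    # MRU reuses "conn-over-congested-link" first
```

With policy="LRU" the last line would hand back the fast-link connection instead, which mirrors the fix described in the post: switching the selection policy from MRU to LRU.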

You can see the truckload of people thanked at the end:

Thanks to all of the engineers who helped us manage and then fix this bug, including James Paussa, Ernesto Ovcharenko, Mark Drayton, Peter Hoose, Ankur Agrawal, Alexey Andreyev, Billy Choe, Brendan Cleary, JJ Crawford, Rodrigo Curado, Tim Eberhard, Kevin Federation, Hans Fugal, Mayuresh Gaitonde, CJ Infantino, Mark Marchukov, Chinmay Mehta, Murat Mugan, Austin Myzk, Gaya Nagarajan, Dmitri Petrov, Marco Rizzi, Rafael Rodriguez, Steve Shaw, Adam Simpkins, David Swafford, Wendy Tobagus, Thomas Tobin, TJ Trask, Diego Veca, Kaushik Veeraraghavan, Callahan Warlick, Jason Wilbanks, Jimmy Williams, and Keith Wright.

Finally, the diagram Facebook used to explain it: