Etsy's Experience with Vitess

Etsy wrote three articles about using Vitess to solve their database performance problems: "Scaling Etsy Payments with Vitess: Part 1 – The Data Model", "Scaling Etsy Payments with Vitess: Part 2 – The "Seamless" Migration", and "Scaling Etsy Payments with Vitess: Part 3 – Reducing Cutover Risk".

Vitess came out of the YouTube team; it uses a proxy layer to handle the query-routing problems that appear once the backend MySQL databases are sharded.

A few points worth pulling out:

The first is how far modern brute force can go: Wikipedia shows Etsy went public back in 2015, yet they only hit the ceiling of vertical scaling (here, GCP's largest instances) at the end of 2020. Brute force lasts a surprisingly long time... and if you were willing to run physical machines, you could probably find even bigger boxes.

The second is the timeline: Etsy started migrating data at the end of 2020 and finished around May 2022, roughly a year and a half, moving 4 databases onto Vitess clusters, 23 tables and 40B rows in total.

The third is using Vindexes to relax the restrictions that sharding introduces. I hadn't looked into this before:

A Vindex provides a way to map a column value to a keyspace ID.

Digging through the old documentation via "Older Version Docs", Vindexes show up in 5.0 and later; going further back on GitHub, the feature seems to exist since the 2016 release, though it was still being fleshed out at the time: "Vitess v2.0.0-rc.1".

Looking at the current feature set, there is the Primary Vindex:

The Primary Vindex for a table is analogous to a database primary key. Every sharded table must have one defined. A Primary Vindex must be unique: given an input value, it must produce a single keyspace ID.

Then there are Secondary Vindexes, which map to keyspace ID(s) and are used for routing:

Secondary Vindexes are additional vindexes against other columns of a table offering optimizations for WHERE clauses that do not use the Primary Vindex. Secondary Vindexes return a single or a limited set of keyspace IDs which will allow VTGate to only target shards where the relevant data is present. In the absence of a Secondary Vindex, VTGate would have to send the query to all shards (called a scatter query).

It is important to note that Secondary Vindexes are only used for making routing decisions. The underlying database shards will most likely need traditional indexes on those same columns, to allow efficient retrieval from the table on the underlying MySQL instances.

Then there are Functional Vindexes and Lookup Vindexes: the former derives the keyspace ID algorithmically, the latter lets you look it up:

A Functional Vindex is a vindex where the column value to keyspace ID mapping is pre-established, typically through an algorithmic function. In contrast, a Lookup Vindex is a vindex that provides the ability to create an association between a value and a keyspace ID, and recall it later when needed. Lookup Vindexes are sometimes also informally referred to as cross-shard indexes.

Lookup Vindexes also have a consistent variant, which guarantees cross-shard consistency without paying for 2PC:

Consistent lookup vindexes use an alternate approach that makes use of careful locking and transaction sequences to guarantee consistency without using 2PC. This gives the best of both worlds, with the benefit of a consistent cross-shard vindex without paying the price of 2PC. To read more about what makes a consistent lookup vindex different from a standard lookup vindex read our consistent lookup vindexes design documentation.
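To make the routing idea concrete, here is a minimal sketch (plain Python, not Vitess code) of how a functional vindex, a lookup vindex, and a scatter query differ. The 4-shard layout, the md5-based hash, and names like shard_for and route are made up for illustration; real Vitess maps keyspace ID ranges to shards via the VSchema.

    import hashlib

    NUM_SHARDS = 4

    def functional_vindex(value) -> int:
        # Functional vindex: the keyspace ID is computed from the value itself,
        # no extra storage or lookup needed.
        digest = hashlib.md5(str(value).encode()).digest()
        return int.from_bytes(digest[:8], "big")

    # Lookup vindex: a stored association between a value and a keyspace ID
    # (a dict here, a real table in Vitess), recalled later when routing.
    lookup_table = {"alice@example.com": functional_vindex(101)}

    def shard_for(keyspace_id: int) -> int:
        # Simplified: real Vitess assigns keyspace ID *ranges* to shards.
        return keyspace_id % NUM_SHARDS

    def route(column: str, value):
        if column == "user_id":                          # Primary Vindex column
            return [shard_for(functional_vindex(value))]
        if column == "email" and value in lookup_table:  # Secondary (lookup) vindex
            return [shard_for(lookup_table[value])]
        return list(range(NUM_SHARDS))                   # no vindex: scatter query

    print(route("user_id", 101))                # one shard
    print(route("email", "alice@example.com"))  # one shard, via the lookup
    print(route("email", "bob@example.com"))    # all shards (scatter)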

Taken together, Vitess seems to have packaged up all the common sharding techniques, so if you ever actually hit this kind of scale, you won't have to build a pile of it yourself in the application or a library...

Etsy on Cache Smearing

Etsy's engineering blog has a post on how they designed their cache layer: "How Etsy caches: hashing, Ketama, and cache smearing".

Using a consistent hash is table stakes at this point; the article spends some space explaining why you want it.
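As a quick refresher on why a single key always lands on a single machine, here is a minimal Ketama-style ring sketch (hypothetical host names, not Etsy's code):

    import bisect
    import hashlib

    def _hash(s: str) -> int:
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:4], "big")

    class Ring:
        def __init__(self, hosts, points=100):
            # each host gets many points on the ring, so adding or removing a
            # host only moves a small fraction of the keys
            self._ring = sorted((_hash(f"{h}-{i}"), h) for h in hosts for i in range(points))
            self._hashes = [p for p, _ in self._ring]

        def host_for(self, key: str) -> str:
            idx = bisect.bisect(self._hashes, _hash(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = Ring(["cache1", "cache2", "cache3"])
    print(ring.host_for("popular_user_data"))  # always the same host for this key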

The second half covers the problem you still have after consistent hashing, namely hot keys: some data is so hot (read so often) that even with a consistent hash a single machine can get overwhelmed.

They did a few things. The first is the cache smearing mechanism: append a random suffix to a hot key so that the resulting keys get spread across different machines:

Let’s take an example of a hot key popular_user_data. This key is read often (since the user is popular) and is hashed to pool member 3. Cache smearing appends a random number in a small range (say, [0, 8)) to the key before each read or write. For instance, successive reads might look up popular_user_data3, popular_user_data1, and popular_user_data6. Because the keys are different, they will be hashed to different hosts. One or more may still be on pool member 3, but not all of them will be, sharing the load among the pool.
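A minimal sketch of the smearing itself: the [0, 8) range comes from the quote, while the 3-host pool and the md5-mod mapping are placeholders standing in for the real consistent hash above.

    import hashlib
    import random

    POOL = ["cache1", "cache2", "cache3"]   # placeholder pool
    SMEAR = 8                               # the [0, 8) range from the quote

    def host_for(key: str) -> str:
        digest = hashlib.md5(key.encode()).digest()
        return POOL[int.from_bytes(digest[:4], "big") % len(POOL)]

    def smeared_key(key: str) -> str:
        # append a random suffix before each read or write, so successive
        # lookups for the same hot key hash to different pool members
        return f"{key}{random.randrange(SMEAR)}"

    for _ in range(3):
        k = smeared_key("popular_user_data")
        print(k, "->", host_for(k))   # e.g. popular_user_data3 -> cache2

The trade-off is the obvious one: the hot value is now cached in up to 8 copies, so you pay memory and extra misses to buy load spreading.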

The second is monitoring which keys are hot:

We’ve seen this problem many times over the years, using mctop, memkeys, and our distributed tracing tools to track down hot keys.

The third is maintaining a list of hot keys (not every key gets cache smearing):

We manually add cache smearing to only our hottest keys, leaving keys that are less read-heavy efficiently stored on a single host.
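Continuing the sketch above, the decision of whether to smear at all could be a tiny wrapper; the hot-key set here is hypothetical.

    # builds on smeared_key() from the previous sketch; HOT_KEYS is hypothetical
    HOT_KEYS = {"popular_user_data", "front_page_listings"}

    def cache_key(key: str) -> str:
        return smeared_key(key) if key in HOT_KEYS else key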

It's a workaround for when you're at the scale where a single hot key can overwhelm a single server...

How Etsy Does Business with Let's Encrypt SSL Certificates...

Etsy's "How Etsy Manages HTTPS and SSL Certificates for Custom Domains on Pattern" explains how they implemented custom domains on top of Let's Encrypt.

The key is the auto-renew mechanism Let's Encrypt was designed around, which lets all the follow-up work be fully automated. That makes integrating Let's Encrypt much easier than integrating other CAs (and saves a lot of fees and contract work).

The second half of the article discusses another problem: when you have thousands of private keys (and certificates), how do you manage them and make sure they all stay secure? They mention plans to introduce HSMs:

One of our stretch goals is to look into deploying HSMs. If there are bugs in the underlying software, the integrity of the entire system could be compromised thus voiding any guarantees we try to keep. While bugs are inevitable, moving critical cryptographic functions into secure hardware will mitigate their impact.

Since they can't realistically stuff every private key into an HSM, presumably the HSM would be used to manage the encrypted private keys; you can imagine the system growing a few more components to split up the responsibilities...

Etsy's SSD Stories

Laurie Denness of Etsy shared his experience with the various brands of SSDs Etsy has used: "SSDs: A gift and a curse".

The key point is right at the top:

SSD firmware is buggy

When SSDs are paired with RAID controllers, you often end up chasing problems... (and they are hard to track down)

Intel comes out well:

Okay, bad start, we’ve actually had no issues with Intel. This seems to be common across other companies we’ve spoken to.

OCZ went under and was acquired by Toshiba, and its S.M.A.R.T. data was poor, making it hard to predict when a drive would die (which would help with replacing drives ahead of time):

However, they had poor SMART info (none) so predicting failures was hard.

HP is one big black box:

Unfortunately, HP have proprietary RAID controllers, and they don’t support SMART. Or rather, they refuse to talk to non-HP drives using off the shelf technology, they have their own methods.

Samsung gets good marks: excellent price/performance, and S.M.A.R.T. works:

Samsung saved the day and picked up from OCZ with a ludicrously cheap 960GB offering, the 840 EVO. A consumer drive, so very limited warranty, but for the price (~$400-500) you got great IOPS and they were reliable. They had better SMART info, and seemed to play nicely with our hardware.

Though the BB6Q firmware wrecked read performance; it was eventually fixed: "Samsung Releases Firmware Update to Fix the SSD 840 EVO Read Performance Bug".

LiteOn died on garbage collection (with two or more drives dropping out of the RAID at the same time):

The SSDs were having extended garbage collection periods, exacerbated by a smaller amount of SSDs with higher IO, in RAID6. This caused the controller to kick the drive out of the array… and unfortunately due to the write levelling across the drives, at least two of them were garbage collecting at the same time, destroying the array integrity.

In the end Dell and LiteOn both dug in, on the RAID controller and the SSD firmware respectively, and fixed it:

Dell and LiteOn together identified and fixed weaknesses in their RAID controller, the backplane and the SSD firmware.

An experience report, and part of the road everyone has to walk while SSDs mature XD

Adjusting Domain Sharding...

Domain Sharding is an old browser "speed-up trick" (really a workaround) whose goal is to get past the browser's limit on the maximum number of connections to a single domain. IE6/IE7, for example, allowed only 2 connections per domain over HTTP/1.1.

Steve Souders' 2008 write-up "Roundup on Parallel Connections" lists the limits of the browsers of that era, and Browserscope's Network page shows more recent numbers.

The environment keeps changing, though: desktop browsers have gradually raised their connection limits, SPDY has been developing, and mobile keeps growing as a share of traffic, so the old Domain Sharding trick needs to be re-examined.
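For reference, the technique itself is just a deterministic mapping from asset path to one of a few hostnames, something like this sketch (the hostnames are illustrative; the two shards match Etsy's post-reduction setup):

    import hashlib

    ASSET_HOSTS = ["img0.example.com", "img1.example.com"]  # hypothetical shard hosts

    def asset_url(path: str) -> str:
        # hash the path so the same asset always comes from the same host
        # (keeps browser and CDN caches effective)
        idx = int.from_bytes(hashlib.md5(path.encode()).digest()[:4], "big") % len(ASSET_HOSTS)
        return f"https://{ASSET_HOSTS[idx]}{path}"

    print(asset_url("/images/listing/12345.jpg"))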

In "Reducing Domain Sharding", Etsy describes cutting the number of sharded domains from four down to two, which improved response times:

  • Roughly 50ms to 80ms less on image-heavy pages, and 30ms to 50ms less on regular pages.
  • Roughly 500ms less on mobile.

These two improvements added 0.27 pages viewed per visit.

Mobile's sensitivity to Domain Sharding in particular gives anyone designing a site today more to think about...

memkeys: mctop (memcache top) rewritten in C++

I covered mctop, the memcache top tool developed by Etsy, in "mctop: memcache top".

It's written in Ruby and is basically a sniffer plus packet analyzer, but it has performance problems: under heavy traffic it can't keep up with all the packets and effectively turns into a sampling-style monitor.

Tumblr rewrote it in C++ as memkeys. According to the project, on a saturated 1Gbps link mctop drops roughly 50% to 75% of packets (a sampling rate of about 25% to 50%), while memkeys drops about 3% (a sampling rate of about 97%): "Open Source - Memcache Top".

This was originally inspired by mctop from etsy. I found that under load mctop would drop between 50 and 75 percent of packets. Under the same load memkeys will typically drop less than 3 percent of packets. This is on a machine saturating a 1Gb network link.

Quite a performance improvement :p