VirMach blows up

I had three machines on VirMach, all bought during various sales and running some simple services, and recently they all blew up: "VirMach Teeters at the Edge".

It's been things like IP addresses being changed without warning and the admin panel being locked up for long stretches, so I ended up moving those little services onto VPSes from other providers.

The LEB thread "★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More" also has plenty of discussion, but it's basically just the place where all the complaints pile up...

The company basically looks like it has imploded; following the usual LEB pattern, the next step is probably folding and relaunching under a new shell?

The root cause of Hacker News being down for so long the other day

Hacker News was down for quite a while a few days ago. Going by the posts on Twitter, it started with this one at 2022/07/08 14:08 UTC:

A restore attempt failed along the way (2022/07/08 17:35 UTC):

And the site finally came back (2022/07/08 20:48 UTC):

From the Twitter timeline that's a bit over six hours, which is quite long for a site where presumably only the database needed restoring. The follow-up thread "HN is up again" digs into the cause, and dang (who runs HN) also mentions the downtime was over seven hours:

8 hours of downtime, but no data loss, since there was no data to lose during the downtime.

Last post before we went down (2022-07-08 12:46:04 UTC): https://news.ycombinator.com/item?id=32026565

First post once we were back up (2022-07-08 20:30:55 UTC): https://news.ycombinator.com/item?id=32026571 (hey, that's this thread! how'd you do that, tpmx?)

So, 7h 45m of downtime. What we don't know is how many posts (or votes, etc.) happened after our last backup, and were therefore lost. The latest vote we have was at 2022-07-08 12:46:05 UTC, which is about the same as the last post.

There can't be many lost posts or votes, though, because I checked HN Search (https://hn.algolia.com/) just before we brought HN back up, and their most recent comment and story were behind ours. That means our last backup on the ill-fated server was taken after the last API update (HN Search relies on our API), and the API gets updated every 30 seconds.

I'm not saying that's a rock-solid argument, but it suggests that 30 seconds is an upper bound on how much data we lost.

Everyone then went looking for dang's replies (first-hand information, after all). A quick Ctrl-F turns up an interesting guess; starting from item 32028511 you can follow the fun exchange. First, mikeiem:

You are never going to guess how long the HN SSDs were in the servers... never ever... OK... I'll tell you: 4.5years. I am not even kidding.

Then kabdib's reply:

Let me narrow my guess: They hit 4 years, 206 days and 16 hours . . . or 40,000 hours.

And that they were sold by HP or Dell, and manufactured by SanDisk.

Do I win a prize?

(None of us win prizes on this one).

Then dang says he thinks this guess is very plausible:

Wow. It's possible that you have nailed this.

Edit: here's why I like this theory. I don't believe that the two disks had similar levels of wear, because the primary server would get more writes than the standby, and we switched between the two so rarely. The idea that they would have failed within hours of each other because of wear doesn't seem plausible.

But the two servers were set up at the same time, and it's possible that the two SSDs had been manufactured around the same time (same make and model). The idea that they hit the 40,000 hour mark within a few hours of each other seems entirely plausible.

Mike of M5 (mikiem in this thread) told us today that it "smelled like a timing issue" to him, and that is squarely in this territory.

He later dug the related submissions out of their own /newest; judging from the keywords he pulled up, the drives look like HPE SSDs:

It's also an example of the dharma of /newest – the rising and falling away of stories that get no attention:

HPE releases urgent fix to stop enterprise SSDs conking out at 40K hours - https://news.ycombinator.com/item?id=22706968 - March 2020 (0 comments)

HPE SSD flaw will brick hardware after 40k hours - https://news.ycombinator.com/item?id=22697758 - March 2020 (0 comments)

Some HP Enterprise SSD will brick after 40000 hours without update - https://news.ycombinator.com/item?id=22697001 - March 2020 (1 comment)

HPE Warns of New Firmware Flaw That Bricks SSDs After 40k Hours of Use - https://news.ycombinator.com/item?id=22692611 - March 2020 (0 comments)

HPE Warns of New Bug That Kills SSD Drives After 40k Hours - https://news.ycombinator.com/item?id=22680420 - March 2020 (0 comments)

(there's also https://news.ycombinator.com/item?id=32035934, but that was submitted today)
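If you're wondering how close your own disks are to that 40,000-hour mark, a quick check with smartmontools looks something like this (just a sketch; the device names are only examples):

$ # SATA/SAS drives: SMART attribute 9 is Power_On_Hours
$ sudo smartctl -A /dev/sda | grep -i power_on_hours
$ # NVMe drives: the SMART/Health log has a "Power On Hours" line
$ sudo smartctl -A /dev/nvme0 | grep -i 'power on hours'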

This downtime really does look like they hit the SSD firmware bug. For now it looks like they've moved onto EC2:

$ host news.ycombinator.com
news.ycombinator.com has address 50.112.136.166
$ host 50.112.136.166      
166.136.112.50.in-addr.arpa domain name pointer ec2-50-112-136-166.us-west-2.compute.amazonaws.com.

From the discussion thread this seems to be temporary?

Slack's explanation of its 2022/02/22 downtime

Slack published an explanation of its blowup earlier this year: "Slack’s Incident on 2-22-22", but the real highlights are in the Hacker News thread: "Slack’s Incident on 2-22-22 (slack.engineering)".

There are roughly three things worth covering: first, the cause of the outage; second, when the post first went up, a lot of people took issue with the "2-22-22" date format in the title; and third, just now (an hour ago) Cal Henderson (Slack's CTO) showed up in the Hacker News thread to respond...

The downtime itself

This downtime was mainly about Group Direct Messages (GDM):

A significant element of the datastore load appeared to be from a query that listed Group Direct Message (GDM) conversations by user. This operation is fronted by our cache tier, so the high query load seemed to indicate something was wrong with our caches.

The GDM query itself isn't very efficient and relies on the cache layer to hold things up. On February 22 they happened to be upgrading the Consul agent, which dropped the cache hit rate, and that combined with a larger-than-usual traffic peak ended up crushing the database.

Oh, and Vitess got tangled up in the middle of it too; the original post explains it more clearly, but it takes some time to read.

2-22-22

When the post first went up, most of the discussion was actually about "2-22-22". It really is a poor way to write a date, especially for a public announcement, but date formats are flame-war material to begin with...

The response from Slack's CTO (Cal Henderson)

While refreshing the page I noticed replies from the account iamcal, and Cal Henderson's (Slack CTO) personal site is www.iamcal.com. I can't be sure the account is really his, but it looks like it was registered back in 2011 and then left untouched...

The account posted two replies. The first points out that failures on AWS are actually quite common, and you have to rely on your own architecture's resilience to ride them out:

Our underlying hardware (AWS) is nothing like this reliable. We see regular (several times a year) failure of racks of machines or whole DCs.

Across the whole fleet (all services), we lose 1-10 servers per day as a baseline. Major events are then on top of that and can impact thousands of hosts at once.

The other pushes back on someone's self-assured back-of-the-envelope estimate of Slack's scale:

> Even the largest Slack instance probably has under 100,000 users and less than 1000 peak messages per second.

This is not true, by an order of magnitude.

Seems worth keeping an eye on it a little longer to see whether more replies show up...

Upgrading a 4TB PostgreSQL 9.6 database to 13 with low downtime

An article from the Hacker News front page a few days ago about upgrading a 4TB PostgreSQL database from 9.6 to 13: "How we upgraded our 4TB Postgres database (retool.com)"; the original post is "How Retool upgraded our 4 TB main application PostgreSQL database". A quick look at LinkedIn shows the author, Peter Johnston, listed as a Security Software Engineer at Retool, and he also shows up in the Hacker News discussion (as mrbabbage), so you can search for his replies there.

After reading the article, the concept isn't actually hard; the main thing is finding the right tools. The basic idea is to produce an initial dump, set up logical replication on top of it, and then deal with all the problems you only run into at the 4TB scale.

The main tool is Citus Data's Warp: "Citus warp: Database migrations without the pain". Warp's limitation is that every table must have a single-column primary key, so for this migration they also made small schema changes to meet Warp's requirement:

We had to do a bit of finagling to coax Warp into processing our database. Warp expects all tables to have a single column primary key, so we had to convert compound primary keys into unique constraints and add scalar primary keys. Otherwise, Warp was very straightforward to use.
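A rough sketch of what that kind of conversion looks like (the table and column names here are made up, not Retool's actual schema, and $DB_URL is just a placeholder):

$ cat add_scalar_pk.sql
-- the table used to have: PRIMARY KEY (account_id, item_id)
ALTER TABLE items DROP CONSTRAINT items_pkey;
ALTER TABLE items ADD CONSTRAINT items_account_id_item_id_key UNIQUE (account_id, item_id);
-- single-column key for the migration tooling to use
ALTER TABLE items ADD COLUMN id bigserial PRIMARY KEY;
$ psql "$DB_URL" -f add_scalar_pk.sql

(Adding that bigserial column rewrites the table under a heavy lock, so on a multi-TB database it's the kind of step you would do well ahead of the cutover window.)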

The two largest append-only tables (2TB and x00 GB respectively) were handled separately: they were skipped during the initial dump and replication, and moved with a home-grown Python script instead:

To handle the two massive tables we skipped in Warp, we wrote a Python script to bulk transfer data from the old database server to the new.
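The script itself isn't published, but the general shape of that kind of bulk transfer, compressed here into a single psql \copy pipe instead of Python (events is a hypothetical table name and the connection URLs are placeholders), is something like:

$ psql "$OLD_DB_URL" -c "\copy events TO STDOUT" | psql "$NEW_DB_URL" -c "\copy events FROM STDIN"

For append-only tables you would normally chunk this by id or timestamp so it can be resumed, and so the tail written during the migration can be caught up separately.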

Then there's disabling and re-enabling the foreign keys, which is pretty much standard practice when dumping and restoring large amounts of data in an RDBMS:

As you can see from the runbook above, one of the steps we had to do was to turn off and then re-enable foreign key constraint checks.
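In PostgreSQL, one common way to do this (a sketch, not necessarily what Retool did) is to skip the FK triggers for the loading session; the setting is per-session, so it has to wrap the load itself. The alternative is to drop the constraints and re-add them afterwards as NOT VALID, then VALIDATE.

$ cat load_without_fk.sql
-- per-session switch that skips FK (and other) trigger checks; usually needs superuser
SET session_replication_role = replica;
-- ... bulk restore / COPY happens here, in the same session ...
SET session_replication_role = DEFAULT;
$ psql "$NEW_DB_URL" -f load_without_fk.sql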

When they finally ran the cutover, the total downtime was only about fifteen minutes:

We scheduled the maintenance window late on Saturday, October 23rd, at the lowest period of Retool cloud traffic. With the configuration described above, we were able to bring up a new database server at version 13 in around 15 minutes, subscribed to changes at our 9.6 primary with logical decoding.

They also mention plans to shard, after which the main database should get split into smaller pieces:

We anticipate we’ll have sharded our database by the end of that support window, and be performing our next substantial version upgrades incrementally.

The core concept of the whole plan isn't hard; the work is mostly in laying the steps out cleanly and executing them...

Zero-downtime migration across clouds

Saw the discussion "Ask HN: Have you ever switched cloud?" about migrating between clouds; vidarh's answer is worth reading...

First, the reasons: it basically all came down to money. They moved from one cloud to another, and then on to dedicated hosting:

Yes. I once did zero downtime migration first from AWS to Google, then from Google to Hetzner for a client. Mostly for cost reasons: they had a lot of free credits, and moved to Hetzner when they ran out.

Their savings from using the credits were at least 20x what the migrations cost.

He then dumps his whole write-up directly into the thread. First, set up a load-balancer-like service on both sides:

* Set up haproxy, nginx or similar as reverse proxy and carefully decide if you can handle retries on failed queries. If you want true zero-downtime migration there's a challenge here in making sure you have a setup that lets you add and remove backends transparently. There are many ways of doing this of various complexity. I've tended to favour using dynamic dns updates for this; in this specific instance we used Hashicorp's Consul to keep dns updated w/services. I've also used ngx_mruby for instances where I needed more complex backend selection (allows writing Ruby code to execute within nginx)
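Since he mentions keeping DNS updated through Consul, a quick way to see what Consul's DNS interface returns for a service looks like this (assuming a local Consul agent with the default DNS port 8600, and backend being a hypothetical service name):

$ dig @127.0.0.1 -p 8600 backend.service.consul SRV +short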

Next is connecting the internal networks, which is basically a site-to-site VPN:

* Set up a VPN (or more depending on your networking setup) between the locations so that the reverse proxy can reach backends in both/all locations, and so that the backends can reach databases both places.

Then set up the database replicas and the mechanisms around them:

* Replicate the database to the new location.

* Ensure your app has a mechanism for determining which database to use as the master. Just as for the reverse proxy we used Consul to select. All backends would switch on promoting a replica to master.

* Ensure you have a fast method to promote a database replica to a master. You don't want to be in a situation of having to fiddle with this. We had fully automated scripts to do the failover.
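On the PostgreSQL side, the "fast promote" he describes boils down to a single call; a minimal sketch, assuming PostgreSQL 12+ and a normal streaming standby (the hostname and data directory are placeholders):

$ psql -h new-db-replica -U postgres -c "SELECT pg_promote();"
$ # or directly on the replica host:
$ pg_ctl promote -D /var/lib/postgresql/data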

Then make sure the application side can switch back and forth cleanly:

* Ensure your app gracefully handles database failure of whatever it thinks the current master is. This is the trickiest bit in some cases, as you either need to make sure updates are idempotent, or you need to make sure updates during the switchover either reliably fail or reliably succeed. In the case I mentioned we were able to safely retry requests, but in many cases it'll be safer to just punt on true zero downtime migration assuming your setup can handle promotion of the new master fast enough (in our case the promotion of the new Postgres master took literally a couple of seconds, during which any failing updates would just translate to some page loads being slow as they retried, but if we hadn't been able to retry it'd have meant a few seconds downtime).

Then, after confirming the new cloud has enough capacity to take the traffic, it's time to migrate, starting by lowering the DNS TTL:

Once you have the new environment running and capable of handling requests (but using the database in the old environment):

* Reduce DNS record TTL.
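You can confirm the lowered TTL has actually propagated with a quick dig (example.com is obviously a placeholder); the first number in the answer line is the remaining TTL in seconds:

$ dig example.com A +noall +answer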

Then point the old load balancer at the new backends; if anything goes wrong at this point you can roll back quickly:

* Ensure the new backends are added to the reverse proxy. You should start seeing requests flow through the new backends and can verify error rates aren't increasing. This should be quick to undo if you see errors.

Next, point DNS at the new load balancer; in theory there shouldn't be any big issues:

* Update DNS to add the new environment reverse proxy. You should start seeing requests hit the new reverse proxy, and some of it should flow through the new backends. Wait to see if any issues.

Then switch the database over to the new site; if something goes wrong you can quickly switch back and figure out where the problem is:

* Promote the replica in the new location to master and verify everything still works. Ensure whatever replication you need from the new master works. You should now see all database requests hitting the new master.

The final stage is tearing out the old infrastructure:

* Drain connections from the old backends (remove them from the pool, but leave them running until they're not handling any requests). You should now have all traffic past the reverse proxy going via the new environment.

* Update DNS to remove the old environment reverse proxy. Wait for all traffic to stop hitting the old reverse proxy.

* When you're confident everything is fine, you can disable the old environment and bring DNS TTL back up.

There's really nothing cloud-specific about this approach; anyone who has planned a datacenter move has probably drawn up something similar, with the same general idea (analyze stateful services and stateless services separately). The difference is that without the cloud's flexible rentals you have to line up a lot more hardware...

I remember Instagram had a similar plan when it moved into Facebook's datacenters; I wrote about it before: "Instagram moving from AWS into Facebook's datacenters".

In Taiwan, the recent example seems to be PChome 24h moving its datacenter onto GCP? We'll see whether they end up presenting their migration story at a GCP event...

This month's GitHub instability is all on the mysql1 cluster...

GitHub published an explanation of this month's four downtimes, all of which are more or less related to the mysql1 cluster: "An update on recent service disruptions". It was written by Keith Ballinger; a quick search shows his title is SVP of Engineering at GitHub.

There's more information about the mysql1 mentioned in the post in "Partitioning GitHub’s relational databases to handle scale" (I also touched on it in "GitHub's MySQL architecture and numbers"). They basically have two approaches for scaling, ProxySQL and Vitess, but you can tell the primary database itself still carries a lot of load.

This time mysql1 seems to have hit a performance bottleneck, but they still haven't found the root cause, which you can tell from the incident write-ups. From the first outage:

The incident appeared to be related to peak load combined with poor query performance for specific sets of circumstances.

The second one:

The following day, we saw the same peak traffic pattern and load on mysql1. We were not able to pinpoint and address the query performance issues before this peak, and we decided to proactively failover before the issue escalated.

The third one:

While we had reduced load seen in the previous incidents, we were not fully confident in the mitigations.

In this third incident, we enabled memory profiling on our database proxy in order to look more closely at the performance characteristics during peak load.

And the most recent, fourth one:

In order to reduce load, we throttled webhook traffic and will continue to use that as a mitigation to prevent future recurrence during peak load times as we continue to investigate further mitigations.

You can tell this basically isn't over yet; the next time problems hit, webhook traffic will probably be the first thing on the chopping block again...

Baserow, an open source alternative to Airtable

Saw Baserow on Hacker News Daily; the Hacker News discussion also has quite a few interesting bits: "Baserow.io – Self-hosted Airtable alternative (baserow.io)".

Baserow's backend is written in Django, and the frontend is mainly Vue.js.

You can see Airtable's problems in the discussion. Setting the pricing issue aside and looking only at the rest... (because, as someone in the thread points out, the company may well have deliberately designed its pricing to steer away users who aren't willing to spend much)

First, there's "reliable downtime" every month. The numbers in that comment are clearly off, though: even seven hours of downtime in a month still works out to roughly 99% uptime (two nines).

Leaving that aside, paid services generally offer a 99.9% monthly SLA (three nines, which only allows 43.2 minutes per month), so this really doesn't meet the industry standard:

Reliable 1-5 hours a month of downtime. There are months where it's literally 1 nine of uptime. Their status page was actually down for a while (I think they forgot to renew it?)
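For reference, the arithmetic behind those numbers for a 30-day month:

$ python3 -c 'm = 30*24*60; print(m/100/60, "hours down at 99%"); print(m/1000, "minutes at 99.9%")'
7.2 hours down at 99%
43.2 minutes at 99.9%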

Another interesting one is this rather bizarre piece of design:

Booleans in the API are `true` or `undefined`. Seriously.

It probably takes a fair amount of background knowledge to understand why anyone would do it this way XDDD

I've passed it along to the data team; if they're interested we'll find a chance to set it up...

Moving a server to a machine room 200 meters away without downtime

A fun story from Hacker News Daily: the author describes on Reddit how he physically moved a running server to a machine room 200 meters away without taking it down, crossing a parking lot along the way: "[Rant... sorta] Physically moved a server today..."; the author's FAQ is at "[FAQ][Rant... sorta] Physically moved a server today...".

I'd treat it as just a story; some details are never explained (like how the network stayed connected), and something feels off, but as a story it's pretty entertaining XD

The move involved running on a UPS and hopping across several daisy-chained switches, plus crossing a parking lot in the middle; an interesting approach, I suppose?

How Braintree (PayPal) runs PostgreSQL

The hardest problems with an RDBMS all revolve around "how do we avoid interrupting service" (lots of things are easy if you don't have to think about uptime/downtime, whether it's ALTER, failover, or backup-and-restore plans). PayPal's Braintree discusses how to change a PostgreSQL database schema without interrupting service in "PostgreSQL at Scale: Database Schema Changes Without Downtime".

Most of the article is DBA-facing detail (e.g. how to avoid triggering large-scale locks that take the service down) rather than developer-facing... but the opening, which I think is the most important part, is something developers need to be involved in:

For all code and database changes, we require that:

  • Live code and schemas be forward-compatible with updated code and schemas: this allows us to roll out deploys gradually across a fleet of application servers and database clusters.
  • New code and schemas be backward-compatible with live code and schemas: this allows us to roll back any change to the previous version in the event of unexpected errors.

To satisfy these two requirements, a schema change may need to be done in several stages instead of all at once. That's also what lets you avoid the scenario of having to take the site down and reload data from a backup...
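As a concrete, made-up example of that multi-stage approach (this is the general pattern, not code from the Braintree article; the table and column names are invented): adding a NOT NULL column to a busy table might be split into steps like these, each of which keeps both the old and the new code working:

$ # step 1: add the column as nullable; live (old) code keeps working
$ psql "$DB_URL" -c "ALTER TABLE payments ADD COLUMN currency text;"
$ # step 2: deploy code that writes the new column, then backfill (batched in practice)
$ psql "$DB_URL" -c "UPDATE payments SET currency = 'USD' WHERE currency IS NULL;"
$ # step 3: enforce the constraint without holding a long exclusive lock
$ psql "$DB_URL" -c "ALTER TABLE payments ADD CONSTRAINT payments_currency_not_null CHECK (currency IS NOT NULL) NOT VALID;"
$ psql "$DB_URL" -c "ALTER TABLE payments VALIDATE CONSTRAINT payments_currency_not_null;"

The NOT VALID / VALIDATE CONSTRAINT pair is what avoids the long exclusive lock that a plain SET NOT NULL would need while it scans the table.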

Worth studying how to play this; once you know how to design the steps for the common scenarios, you'll be a lot more comfortable when you actually run into them.

1Password's Terraform story...

Saw the post "Terraforming 1Password" and this tweet, about 1Password adopting Terraform and taking a service interruption of several hours to do it:

This is basically negative publicity XDDD

Both for 1Password's engineering capability and for Terraform's flexibility...