Backblaze's Hard Drive Purchasing Strategy

The article "How Backblaze Buys Hard Drives" lays out Backblaze's strategy for purchasing hard drives. As you can see, it is driven entirely by cost, so most of it doesn't apply to individual users, and typical companies shouldn't copy it wholesale either, but it makes for an interesting read...

For example, because they run so many drives, power draw is a significant factor when they evaluate cost, something that rarely comes up in ordinary scenarios:

Power draw is a very important metric for us and the high speed enterprise drives are expensive in terms of power cost. We now total around 1.5 megawatts in power consumption in our centers, and I can tell you that every watt matters for reducing costs.
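
A quick back-of-envelope calculation (my own numbers, not from the article) shows why every watt matters at that scale, assuming an industrial rate of around $0.10/kWh and a fleet of roughly 100,000 drives:

    1.5 MW × 8,760 h/year ≈ 13.1 GWh/year
    13.1 GWh/year × $0.10/kWh (assumed) ≈ $1.3M/year in electricity alone
    1 W saved per drive × 100,000 drives (assumed) ≈ $87,600/year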

They also discuss the characteristics of SMR drives: SMR gives you more capacity per dollar, but it forces the architecture to accommodate it (a cache layer) and raises engineering costs, so they're not fond of it:

SMR would give us a 10-15% capacity-to-dollar boost, but it also requires host-level management of sequential data writing. Additionally, the new archive type of drives require a flash-based caching layer. Both of these requirements would mean significant increases in engineering resources to support and thereby even more investment. So all-in-all, SMR isn’t cost-effective in our system.
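
As an aside (mine, not the article's), if you ever need to check whether a Linux box sees a drive as a zoned SMR device, the kernel exposes the zone model and recent lsblk can display it; output here is illustrative, and note that drive-managed SMR disks hide this from the host and still report "none":

    $ lsblk -o NAME,ZONED
    NAME ZONED
    sda  none
    sdb  host-managed
    $ cat /sys/block/sdb/queue/zoned
    host-managed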

On the cost side, the trend they observe is a 5-10% price drop per quarter:

Ideally, I can achieve a 5-10% cost reduction per terabyte per quarter, which is a number based on historical price trends and our performance for the past 10 years.
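
Compounded over a year, that quarterly target works out to (my own arithmetic, not from the article):

    (1 − 0.05)^4 ≈ 0.815 → about an 18.5% cost reduction per terabyte per year
    (1 − 0.10)^4 ≈ 0.656 → about a 34.4% cost reduction per terabyte per year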

They also mention using a SAS controller to attach multiple SATA drives (still a cost consideration), which is an interesting topic in its own right:

Longer term, one thing we’re looking toward is phasing out SATA controller/port multiplier combo. This might be more technical than some of our readers want to go, but: SAS controllers are a more commonly used method in dense storage servers. Using SATA drives with SAS controllers can provide as much as a 2x improvement in system throughput vs SATA, which is important to me, even though serial ATA (SATA) port multipliers are slightly less expensive. When we started our Storage Pod construction, using SATA controller/port multiplier combo was a great way to keep costs down. But since then, the cost for using SAS controllers and backplanes has come down significantly.
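
As a side note (mine, not the article's), on a Linux host you can see which transport each disk actually sits behind with lsscsi; SATA drives hanging off a SAS controller show up with a SAS transport (output here is illustrative):

    $ lsscsi -t
    [0:0:0:0]  disk  sas:0x5000c50084a3f2d9  /dev/sda
    [1:0:0:0]  disk  sata:                   /dev/sdb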

Installing Ubuntu 18.04 on the Home Desktop

Last Thursday the desktop at home stopped booting. After a day of digging I found that the system SSD had died, so I bought an M.2 SSD and figured I'd upgrade from Ubuntu 16.04 to Ubuntu 18.04 while I was at it. But Ubuntu 18.04 replaces the default Unity desktop with GNOME (dressed up in a Unity-like skin), and the machine had also recently moved from an Intel platform to AMD, so everything got thoroughly confusing and the whole thing turned into one long minefield...

The first problem was installing with UEFI + LUKS. I originally wanted to install onto the M.2 SSD, but Ubuntu 18.04's grub-install insisted on writing to /dev/sda and wouldn't accept anything else: "'Unable to install GRUB in /dev/sda' when installing GRUB". Even the workaround from that thread didn't help, so I gave up, put a SATA SSD on SATA port 1, and used the M.2 as a data drive.
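
For the record, the usual manual route on a UEFI system is to repair the bootloader from a chroot. A minimal sketch, assuming the root filesystem lives on /dev/nvme0n1p2 (inside LUKS) and the EFI system partition is /dev/nvme0n1p1 (both device names are assumptions); it didn't rescue my setup, but it's the standard approach:

    # open the encrypted root first when using LUKS (device names assumed)
    sudo cryptsetup luksOpen /dev/nvme0n1p2 cryptroot
    sudo mount /dev/mapper/cryptroot /mnt
    sudo mount /dev/nvme0n1p1 /mnt/boot/efi
    for d in /dev /dev/pts /proc /sys /run; do sudo mount -B $d /mnt$d; done
    sudo chroot /mnt grub-install --target=x86_64-efi --efi-directory=/boot/efi
    sudo chroot /mnt update-grub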

Hardware-related problems:

Software-related problems:

  • PPPoE currently can't be configured from the GUI (argh). Of the available methods I'd recommend setting it up with pppoeconf, and you can then edit /etc/ppp/options to add the IPv6 settings (see the sketch after this list).
  • I originally wanted to install gnome-shell-extension-system-monitor to keep an eye on the system, but it made the machine extremely laggy; with it disabled the machine was merely laggy (that's how I eventually tracked down the Intel I211-AT problem).
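
A minimal sketch of the pppoeconf route (the +ipv6 option is the IPv6 setting I mean above; whether IPv6CP actually gets negotiated depends on the ISP):

    sudo pppoeconf              # interactive; writes /etc/ppp/peers/dsl-provider
    # in /etc/ppp/options, add this line to request IPv6CP on the link:
    +ipv6
    sudo pon dsl-provider       # bring the PPPoE session up
    sudo poff dsl-provider      # and take it down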

At this point it's at least usable; what's left is gradually restoring all the settings...

Comparing PCIe and SATA SSDs

The folks at LogicMonitor compared PCIe SSDs against SATA SSDs, and what they cared about was read/write latency rather than raw throughput: "Device Utilization of PCIe and SATA SSDs".

The article is quite long and writes out the whole diagnostic process, from the impact of latency through to the changes in queue servicing.

After switching to the PCIe SSD, write latency dropped from 1.8ms to around 0.02ms, roughly two orders of magnitude.
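
If you want to watch the same numbers on your own machines (my own aside, not from the article), iostat from the sysstat package reports per-device latency; the await columns (w_await for writes) are the ones that correspond to the latency discussion here:

    iostat -x 1    # extended per-device stats, refreshed every second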

The article also mentions fio as a testing tool; I should find some time to try it out and get familiar with it...
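
As a starting point (my own, not taken from the article), a 4k random-write run at queue depth 1 gives per-IO latency numbers comparable to the ones above; the "clat" percentiles in fio's output are the part to look at. Adjust the target file and sizes to taste:

    fio --name=randwrite --rw=randwrite --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=1 --size=1g --runtime=60 \
        --time_based --filename=/tmp/fio.test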