Exfiltrating data across an air gap via electromagnetic signals from the SATA interface

Saw 「SATAn: Air-Gap Exfiltration Attack via Radio Signals From SATA Cables」 on the Hacker News front page: using electromagnetic signals generated through the SATA interface to exfiltrate data past an air gap. The corresponding discussion is at 「SATAn: Air-Gap Exfiltration Attack via Radio Signals from SATA Cables (arxiv.org)」.

Although air-gap computers have no wireless connectivity, we show that attackers can use the SATA cable as a wireless antenna to transfer radio signals at the 6 GHz frequency band.

Skimming the paper for the distances involved: on PC-1 they tested out to 120 cm, with a corresponding SNR of 9 dB:

Table IV presents the signal-to-noise ratio (SNR) received with the three transmitting computers. The signal transmitted from PC-1 has a strength of 20 dB at 30 cm to 9 dB at 120 cm apart. The signals generated from PC-2 and PC-3 were significantly weaker, with 15 dB at 60 cm (PC-2) and 7 dB at 30 cm (PC-3).

Also, probably because this is a PoC, they only did a quick test to show it works (which is already a big enough threat for environments that actually rely on an air gap as a protection mechanism); it doesn't look like they pushed to see how fast it can go:

We transmitted the data with a bit rate of 1 bit/sec, which is shown to be the minimal time to generate a signal which is strong enough for modulation.
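To make the mechanism concrete, here is a minimal sketch of what an OOK-style sender at that bit rate could look like, assuming a "1" is one second of busy SATA reads and a "0" is one second of idle. The file path and read size are made up for illustration, and this is not the authors' actual transmitter code.

```python
import time

BIT_TIME = 1.0          # seconds per bit, matching the paper's 1 bit/sec rate
CHUNK = 1024 * 1024     # read size per operation (arbitrary for this sketch)
PATH = "/tmp/satan-carrier.bin"  # hypothetical file on the SATA disk; must exist beforehand

def send_bit(bit: int) -> None:
    """'1' = keep the SATA link busy for one bit period, '0' = stay idle."""
    # A real transmitter would need to bypass the OS page cache (e.g. O_DIRECT)
    # so the reads actually hit the SATA link; this sketch leaves that out.
    deadline = time.time() + BIT_TIME
    if bit:
        with open(PATH, "rb") as f:
            while time.time() < deadline:
                if not f.read(CHUNK):   # loop the file to keep reads flowing
                    f.seek(0)
    else:
        time.sleep(BIT_TIME)

def send_byte(value: int) -> None:
    for i in range(7, -1, -1):          # MSB first
        send_bit((value >> i) & 1)

if __name__ == "__main__":
    for ch in b"OK":
        send_byte(ch)
```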

As for countermeasures, this class of technique (leaking via electromagnetic signals) has shown up on other devices before, and current air-gap standards should already guard against electromagnetic leakage; this paper is mainly a demonstration that SATA can be abused the same way XD

Lanner's IIoT-I530

Because of work I keep an eye on some niche hardware, but for now I don't really have anywhere to put these notes, so I'll just dump them on the blog...

What caught my eye this time is a machine with a pile of PoE+ ports: 「Tiger Lake-U system features dual 2.5GbE and six PoE+ ports」.

Besides PoE+ it also has mSATA and SATA support, plus a bunch of M.2 interfaces to plug into (apparently over PCIe):

Lanner’s “IIoT-I530” embedded PC runs Linux on an 11th Gen U-series CPU and supplies with up to 64GB RAM, 2x 2.5GbE, 6x PoE+, 2x COM, 4x USB 3.0, 2x HDMI, 3x M.2, SATA, mSATA, and DIO.

Backblaze's hard drive purchasing strategy

「How Backblaze Buys Hard Drives」 describes Backblaze's hard drive purchasing strategy. It is entirely cost-driven, so the strategies there aren't much use for individuals, and ordinary companies shouldn't copy them wholesale either, but it's still an interesting read...

For example, because they run so many drives, the drives' power consumption is a significant part of their cost evaluation, something that rarely comes up in ordinary scenarios:

Power draw is a very important metric for us and the high speed enterprise drives are expensive in terms of power cost. We now total around 1.5 megawatts in power consumption in our centers, and I can tell you that every watt matters for reducing costs.
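To get a rough sense of the scale, a back-of-the-envelope calculation; aside from the 1.5 MW total quoted above, the per-drive wattage and electricity price here are assumptions for illustration, not Backblaze's numbers:

```python
# Back-of-the-envelope power cost per drive; all inputs except the
# 1.5 MW total from the quote are assumed for illustration.
WATTS_PER_DRIVE = 7.0        # assumed average draw of a 3.5" HDD
PRICE_PER_KWH   = 0.10       # assumed electricity price in USD
HOURS_PER_YEAR  = 24 * 365

kwh_per_year  = WATTS_PER_DRIVE * HOURS_PER_YEAR / 1000
cost_per_year = kwh_per_year * PRICE_PER_KWH
print(f"~{kwh_per_year:.0f} kWh/year, ~${cost_per_year:.2f}/drive/year")

# Scaled to the 1.5 MW figure mentioned in the quote:
total_kwh_per_year = 1_500_000 * HOURS_PER_YEAR / 1000
print(f"1.5 MW continuous ≈ {total_kwh_per_year/1e6:.1f} GWh/year "
      f"≈ ${total_kwh_per_year * PRICE_PER_KWH/1e6:.1f}M/year at $0.10/kWh")
```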

They also discuss the characteristics of SMR drives: you get more capacity per dollar, but the architecture has to accommodate them (caching) and the engineering cost goes up, so they're not fond of them:

SMR would give us a 10-15% capacity-to-dollar boost, but it also requires host-level management of sequential data writing. Additionally, the new archive type of drives require a flash-based caching layer. Both of these requirements would mean significant increases in engineering resources to support and thereby even more investment. So all-in-all, SMR isn’t cost-effective in our system.

On the cost side, what they observe is roughly a 5%~10% drop per quarter:

Ideally, I can achieve a 5-10% cost reduction per terabyte per quarter, which is a number based on historical price trends and our performance for the past 10 years.
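Compounded over a few years that adds up quickly; a quick calculation of where $/TB lands after three years at those rates, with the starting price being just a placeholder:

```python
# Compounding the quoted 5-10% per-quarter decline in $/TB.
# The $20/TB starting point is a placeholder, not a number from the article.
start = 20.0  # USD per TB, assumed
for rate in (0.05, 0.10):
    price = start
    for quarter in range(12):             # three years
        price *= (1 - rate)
    print(f"{rate:.0%}/quarter: ${start:.0f}/TB -> ${price:.2f}/TB after 3 years")
```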

They also mention using SAS controllers to drive multiple SATA drives (again a cost consideration), which is an interesting bit:

Longer term, one thing we’re looking toward is phasing out SATA controller/port multiplier combo. This might be more technical than some of our readers want to go, but: SAS controllers are a more commonly used method in dense storage servers. Using SATA drives with SAS controllers can provide as much as a 2x improvement in system throughput vs SATA, which is important to me, even though serial ATA (SATA) port multipliers are slightly less expensive. When we started our Storage Pod construction, using SATA controller/port multiplier combo was a great way to keep costs down. But since then, the cost for using SAS controllers and backplanes has come down significantly.
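The "up to 2x" claim is easier to picture as an aggregate-bandwidth question. Below is a toy comparison of one 6 Gb/s SATA host port shared through a port multiplier versus per-drive links behind a SAS HBA; the drive count and per-drive throughput are assumptions, and actual gains depend on the real topology.

```python
# Toy comparison: shared SATA port-multiplier uplink vs. per-drive SAS links.
# Drive count and per-drive throughput are assumed; the "up to 2x" figure in
# the quote is about Backblaze's specific Storage Pod setup.
DRIVES         = 5             # drives behind one controller port (assumed)
DRIVE_MBPS     = 180           # sustained MB/s per HDD (assumed)
SATA_LINK_MBPS = 600           # one 6 Gb/s link ≈ 600 MB/s usable

# Port multiplier: all drives funnel through a single 6 Gb/s host link.
pm_throughput  = min(DRIVES * DRIVE_MBPS, SATA_LINK_MBPS)

# SAS HBA/backplane: each drive effectively gets its own link to the HBA,
# so the drives themselves become the bottleneck (ignoring PCIe limits).
sas_throughput = DRIVES * DRIVE_MBPS

print(f"port multiplier: ~{pm_throughput} MB/s for {DRIVES} drives")
print(f"SAS backplane:   ~{sas_throughput} MB/s for {DRIVES} drives")
```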

Installing Ubuntu 18.04 on the home desktop

Last Thursday the desktop at home wouldn't boot. After a day of digging it turned out the system SSD had died, so I bought an M.2 SSD and planned to upgrade from Ubuntu 16.04 to Ubuntu 18.04 while I was at it. But Ubuntu 18.04 switched the default desktop from Unity to GNOME (with a Unity-like skin on top), and the machine had also recently moved from an Intel platform to AMD, so everything got thoroughly confusing and the whole thing turned into a long series of landmines...

The first was a UEFI + LUKS install problem. I originally wanted to install onto the M.2 SSD, but grub-install on Ubuntu 18.04 insisted on writing to /dev/sda and couldn't be changed: 「“Unable to install GRUB in /dev/sda” when installing GRUB」. Even following the workaround in that post didn't help, so I gave up, grabbed a SATA SSD and put it on SATA port 1, and turned the M.2 into a data disk.

Hardware-related issues:

Software-related issues:

  • Setting up PPPoE from the GUI isn't supported yet (argh). Of the available approaches I'd recommend using pppoeconf, and then editing /etc/ppp/options to add the IPv6 settings.
  • I originally wanted to install gnome-shell-extension-system-monitor to watch system status, but it made the system extremely laggy; after turning it off it went back to just ordinarily laggy (and later I found that Intel I211-AT problem).

At least it's usable now; next up is gradually filling back in all the various settings...

Comparing PCIe SSDs and SATA SSDs

The folks at LogicMonitor compared PCIe SSDs and SATA SSDs; what they care about is read/write latency rather than raw throughput: 「Device Utilization of PCIe and SATA SSDs」.

The article is quite long and walks through how they tracked down the cause, from the impact of latency through to the change in queue service behavior.

After switching to PCIe SSDs, write latency dropped from 1.8 ms to around 0.02 ms, roughly two orders of magnitude.
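Viewed as device utilization (a common approximation is IOPS × average service time), the gap is dramatic; the 500 writes/sec rate below is an assumed workload, while the latencies are the ones quoted above:

```python
# Device utilization ≈ IOPS × average service time.
# The 500 writes/sec figure is assumed for illustration; the latencies
# come from the numbers mentioned above.
IOPS = 500                      # assumed sustained write rate

for name, latency_s in (("SATA SSD", 0.0018), ("PCIe SSD", 0.00002)):
    utilization = IOPS * latency_s
    print(f"{name}: {utilization:.0%} busy at {IOPS} writes/sec")
```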

The article also mentions fio as a testing tool; I should find time to try it out and get familiar with it...