NVIDIA Open-Sources Its Linux GPU Kernel Driver

NVIDIA announced that it has open-sourced its GPU kernel driver for Linux: "NVIDIA Releases Open-Source GPU Kernel Modules".

From the wording of the announcement, this looks to be driven by momentum on the datacenter side: in this open-source release, support for datacenter GPUs is production quality, while support for GeForce and Workstation GPUs is flagged outright as alpha quality:

Which GPUs are supported by Open GPU Kernel Modules?

Open kernel modules support all Ampere and Turing GPUs. Datacenter GPUs are supported for production, and support for GeForce and Workstation GPUs is alpha quality. Please refer to the Datacenter, NVIDIA RTX, and GeForce product tables for more details (Turing and above have compute capability of 7.5 or greater).

Meanwhile, the user-mode driver remains closed source:

Will the source for user-mode drivers such as CUDA be published?

These changes are for the kernel modules; while the user-mode components are untouched. So the user-mode will remain closed source and published with pre-built binaries in the driver and the CUDA toolkit.

For nouveau, this means there is now an open-source driver to mine things from, but can it dig out enough to reach the same performance level as the proprietary one?

Ubuntu 22.04 LTS Released

Okay, DevOps engineers and SREs everywhere are probably about to get busy again; the biennial event is here, Ubuntu 22.04 LTS has been released: "Canonical Ubuntu 22.04 LTS is released".

There is some notable news in this release, such as it being the first time Ubuntu Desktop officially supports the Raspberry Pi 4 platform:

For innovators on Raspberry Pi, Ubuntu 22.04 LTS marks the first LTS release with Ubuntu Desktop support on the Raspberry Pi 4.

As usual I'll let it sit for a month first; the common problems are typically mostly fixed about three months in. My desktop runs Xubuntu (the Xfce flavor), so that one will have to wait a bit longer...

A Tool for Watching Who Accesses the Clipboard (under X11)

A discussion I saw on Hacker News two months ago: someone wanted to know who is accessing the clipboard under X11: "Who keeps an eye on clipboard access? (ovalerio.net)". The original article is "Who keeps an eye on clipboard access?", and the author's Python program is at "clipboard-watcher".
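
To make the mechanism concrete, here is a minimal C sketch (my own illustration, not the author's Python code) of one building block such a watcher relies on: the XFIXES extension can report every change of CLIPBOARD ownership, i.e. every write; spotting readers takes more work, such as owning the selection yourself and logging the SelectionRequest events that pastes generate:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main(void) {
  Display *dpy = XOpenDisplay(NULL);
  int ev_base, err_base;
  if (!dpy || !XFixesQueryExtension(dpy, &ev_base, &err_base)) return 1;
  Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
  /* ask the X server to report every change of CLIPBOARD ownership */
  XFixesSelectSelectionInput(dpy, DefaultRootWindow(dpy), clipboard,
                             XFixesSetSelectionOwnerNotifyMask);
  for (;;) {
    XEvent ev;
    XNextEvent(dpy, &ev);
    if (ev.type == ev_base + XFixesSelectionNotify) {
      XFixesSelectionNotifyEvent *se = (XFixesSelectionNotifyEvent *) &ev;
      printf("CLIPBOARD owner changed: window 0x%lx\n", se->owner);
    }
  }
}
// build: cc watch.c -lX11 -lXfixes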

It immediately reminded me of the mechanism iOS introduced in 2020: "iOS 14 clipboard notifications are annoying, but developer adoption of a new API will improve the experience".

Running it under X11, though, you'll find the amount of output a bit overwhelming; for example it sprays notifications like crazy while I'm cutting and pasting in the browser writing WordPress posts. A program whitelist would make it nicer. Since I've already disabled the read side of the browser's clipboard API (websites can still write to it), I don't care about the cases where the browser writes to the clipboard.

Also, the program sometimes hangs (especially when the clipboard contains images), which I'd count as a bug...

In the Hacker News thread, someone brought up an interesting design: they want to stop programs that don't have focus from touching the clipboard:

By far the worst offense I've seen in clipboard privacy on the Linux desktop is RedHat's virt-manager. It sends your clipboard AND selection content to all virtual machines, even when they are not focused, with no indication that it's happening, and with no GUI option to turn it off. This is at odds with the common practice of running untrusted code in virtual machines.

That idea doesn't sound bad; in theory the clipboard should only be touched during user interaction...

Problems with Linux's Plan to Merge /dev/random and /dev/urandom

Saw "Problems emerge for a unified /dev/*random (lwn.net)" on Hacker News; the original article is "Problems emerge for a unified /dev/*random" (paywalled, but readable directly via the link on Hacker News).

The nature of the two devices in the title requires some background knowledge; the Wikipedia article on "/dev/random" is a good reference. Both are CSPRNGs; the main difference is that /dev/urandom normally never blocks:

The /dev/urandom device typically was never a blocking device, even if the pseudorandom number generator seed was not fully initialized with entropy since boot.

/dev/random is not guaranteed to be non-blocking; it may stall when there isn't enough entropy:

/dev/random typically blocked if there was less entropy available than requested; more recently (see below, different OS's differ) it usually blocks at startup until sufficient entropy has been gathered, then unblocks permanently.
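
For completeness, the modern interface that sidesteps both device files is getrandom(2), whose default semantics are essentially what a unified device would aim for: block once until the pool is seeded, then never block again. A minimal sketch (my own illustration, not from the article):

#include <stdio.h>
#include <sys/types.h>
#include <sys/random.h>

int main(void) {
  unsigned char buf[16];
  /* flags = 0: blocks only until the kernel CSPRNG has been seeded once
     after boot, then behaves like a never-blocking /dev/urandom */
  ssize_t n = getrandom(buf, sizeof(buf), 0);
  if (n < 0) { perror("getrandom"); return 1; }
  for (ssize_t i = 0; i < n; i++) printf("%02x", buf[i]);
  printf("\n");
  return 0;
}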

As an aside: since this is a change to crypto-related design, and a kernel-level interface at that, security and compatibility are points people care deeply about; much of the Hacker News discussion doesn't care much about either, so you'll see plenty of "very interesting" ideas floated there XDDD

Back to the article: Jason A. Donenfeld (one of the RNG maintainers in the Linux kernel, though lately better known as the inventor of WireGuard) has been steadily reworking this part of the kernel, and this time planned to outright replace /dev/urandom with /dev/random: "Uniting the Linux random-number devices".

After the change, though, Google's Guenter Roeck reported that it blew up in QEMU environments:

This patch (or a later version of it) made it into mainline and causes a large number of qemu boot test failures for various architectures (arm, m68k, microblaze, sparc32, xtensa are the ones I observed). Common denominator is that boot hangs at "Saving random seed:". A sample bisect log is attached. Reverting this patch fixes the problem.

He found the offending commit via git bisect; from the hang message you can also roughly guess that entropy is scarce inside virtual machines.

The mailing-list thread between the three of them (plus Linus) shows quite a bit of back and forth: "Re: [PATCH v1] random: block in /dev/urandom", including code that tries to "feed" entropy into /dev/urandom...
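
For reference, the long-standing userspace mechanism for crediting entropy to the kernel pool is the RNDADDENTROPY ioctl on /dev/random, documented in random(4); a rough sketch of that general mechanism (my own illustration, not the actual code from the thread):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

int main(void) {
  /* rand_pool_info ends in a flexible buffer, so reserve room after it */
  struct {
    struct rand_pool_info info;
    unsigned char data[32];
  } e;
  e.info.entropy_count = 32 * 8;        /* entropy credited, in bits */
  e.info.buf_size = 32;                 /* payload size, in bytes */
  memset(e.data, 0x42, sizeof(e.data)); /* placeholder; use real entropy! */
  int fd = open("/dev/random", O_WRONLY); /* needs CAP_SYS_ADMIN */
  if (fd < 0 || ioctl(fd, RNDADDENTROPY, &e) < 0) return 1;
  close(fd);
  return 0;
}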

There will probably be more attempts, but in the short term it looks like the two devices will stay separate...

McBopomofo for fcitx5 on Linux

Saw on Twitter that McBopomofo announced support for fcitx5 on Linux.

The project is on GitHub at "fcitx5-mcbopomofo: 小麥注音輸入法 fcitx5 模組". Since my own Ubuntu 20.04 desktop still runs fcitx 4.x (with the Chewing input method, via the fcitx-chewing package), I'll hold off for now, but I'm happy to help spread the word...

Maybe I'll find a chance to practice packaging it into an Ubuntu PPA... when I have time...

The Story Behind Dirty Pipe, the Recent Linux Kernel Vulnerability, Is Quite Entertaining...

Saw "The Dirty Pipe Vulnerability" on Hacker News, a Linux kernel security issue; the related discussion is at "The Dirty Pipe Vulnerability (cm4all.com)".

The bug this time is in splice(). Let's start with the code the author wrote to reproduce it. The first program runs continuously as user1:

#include <unistd.h>

/* endlessly append "AAAAA" to stdout (redirected into the file foo) */
int main(int argc, char **argv) {
  for (;;) write(1, "AAAAA", 5);
}
// ./writer >foo

Then the second program also runs continuously (it can run as a different user2, with no access whatsoever to user1's files):

#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv) {
  for (;;) {
    /* pull 2 bytes from stdin (foo) into the pipe towards cat, then
       write "BBBBB" into the same pipe; the Bs should never be able
       to reach foo itself */
    splice(0, 0, 1, 0, 2, 0);
    write(1, "BBBBB", 5);
  }
}
// ./splicer <foo |cat >/dev/null

In theory you should never see the string BBBBB inside foo, yet it punches right through... Using git bisect, he also confirmed the problem was introduced by the commit "pipe: merge anon_pipe_buf*_ops".

The road to finding the bug was quite long, though. It started with a support ticket at a web hosting service: a customer's downloaded access logs turned out to be corrupt and wouldn't decompress:

It all started a year ago with a support ticket about corrupt files. A customer complained that the access logs they downloaded could not be decompressed. And indeed, there was a corrupt log file on one of the log servers; it could be decompressed, but gzip reported a CRC error.

He just fixed it by hand and closed the ticket:

I fixed the file’s CRC manually, closed the ticket, and soon forgot about the problem.

It happened again a few months later, and again; after several more support tickets he finally had some "data" to look at:

Months later, this happened again and yet again. Every time, the file’s contents looked correct, only the CRC at the end of the file was wrong. Now, with several corrupt files, I was able to dig deeper and found a surprising kind of corruption. A pattern emerged.

And since it occurred so infrequently, and the reasoning kept hitting dead ends, he couldn't afford to spend much time on it:

None of this made sense, but new support tickets kept coming in (at a very slow rate). There was some systematic problem, but I just couldn’t get a grip on it. That gave me a lot of frustration, but I was busy with other tasks, and I kept pushing this file corruption problem to the back of my queue.

When he finally put real time into it, he used the earlier pattern to scan the whole system for corrupt files, and a regularity showed up:

External pressure brought this problem back into my consciousness. I scanned the whole hard disk for corrupt files (which took two days), hoping for more patterns to emerge. And indeed, there was a pattern:

  • there were 37 corrupt files within the past 3 months
  • they occurred on 22 unique days
  • 18 of those days have 1 corruption
  • 1 day has 2 corruptions (2021-11-21)
  • 1 day has 7 corruptions (2021-11-30)
  • 1 day has 6 corruptions (2021-12-31)
  • 1 day has 4 corruptions (2022-01-31)

The last day of each month is clearly the one on which most corruptions occur.

He then tried writing all sorts of reproduction code; the version that finally worked is the one shown at the top. Once he realized the bug could be an exploitable security vulnerability, he reported it. From the first support ticket to final resolution took almost a year, although the fix on the Linux kernel side landed quickly:

  • 2021-04-29: first support ticket about file corruption
  • 2022-02-19: file corruption problem identified as Linux kernel bug, which turned out to be an exploitable vulnerability
  • 2022-02-20: bug report, exploit and patch sent to the Linux kernel security team
  • 2022-02-21: bug reproduced on Google Pixel 6; bug report sent to the Android Security Team
  • 2022-02-21: patch sent to LKML (without vulnerability details) as suggested by Linus Torvalds, Willy Tarreau and Al Viro
  • 2022-02-23: Linux stable releases with my bug fix (5.16.11, 5.15.25, 5.10.102)
  • 2022-02-24: Google merges my bug fix into the Android kernel
  • 2022-02-28: notified the linux-distros mailing list
  • 2022-03-07: public disclosure

The whole story is quite a ride XD

Ingo Molnár's Mega-Patch to Speed Up Linux Kernel Compilation

Saw "Massive ~2.3k Patch Series Would Improve Linux Build Times 50~80% & Fix "Dependency Hell"" on the Hacker News front page; the corresponding mailing-list message is "* [PATCH 0000/2297] [ANNOUNCE, RFC] "Fast Kernel Headers" Tree -v1: Eliminate the Linux kernel's "Dependency Hell"". Check out that 0000/2297 prefix XDDD

The main goal is to improve the Linux kernel's compile time (as the project name "Fast Kernel Headers" suggests); he just didn't expect the improvement to be this large. Along the way it also tackles the dependency-hell problem (improving maintainability).

The measured results are impressive: from 231.34 +- 0.60 secs (15.5 builds/hour) down to 129.97 +- 0.51 secs (27.7 builds/hour), a 78% improvement in build throughput (27.7 / 15.5 ≈ 1.78). In CPU time, it drops from 11,474,982.05 msec cpu-clock to 7,100,730.37 msec cpu-clock, which by the same throughput measure is a 61.6% improvement...

This took over a year of experimenting with different approaches. The first attempts did improve things, but the gains were too small for how invasive the changes were, so he didn't think them worth posting; only the third attempt reached this result...

First attempt:

When I started this project, late 2020, I expected there to be maybe 50-100 patches. I did a few crude measurements that suggested that about 20% build speed improvement could be gained by reducing header dependencies, without having a substantial runtime effect on the kernel. Seemed substantial enough to justify 50-100 commits.

Second attempt:

But as the number of patches increased, I saw only limited performance increases. By mid-2021 I got to over 500 commits in this tree and had to throw away my second attempt (!), the first two approaches simply didn't scale, weren't maintainable and barely offered a 4% build speedup, not worth the churn of 500 patches and not worth even announcing.

Third attempt:

With the third attempt I introduced the per_task() machinery which brought the necessary flexibility to reduce dependencies drastically, and it was a type-clean approach that improved maintainability. But even at 1,000 commits I barely got to a 10% build speed improvement. Again this was not something I felt comfortable pushing upstream, or even announcing. :-/

Building on the third attempt looked promising, and to his surprise the later speedups turned out far larger than expected:

But the numbers were pretty clear: 20% performance gains were very much possible. So I kept developing this tree, and most of the speedups started arriving after over 1,500 commits, in the fall of 2021. I was very surprised when it went beyond 20% speedup and more, then arrived at the current 78% with my reference config. There's a clear super-linear improvement property of kernel build overhead, once the number of dependencies is reduced to the bare minimum.

The patch series is enormous, but given how much it improves compile times, there should be plenty of discussion... The news is still fresh (the mail went out at 5am Taiwan time today), so we'll see what the other heavyweights reply later...

Tuning PostgreSQL on ZFS

"Everything I've seen on optimizing Postgres on ZFS" covers how to tune things when running PostgreSQL on top of ZFS. The author appears to keep updating the post, so it's worth going back to when needed...

The main audience is people running self-hosted PostgreSQL. Compared with ext4 or XFS, having ZFS underneath enables many things, like compression and snapshots, which make a lot of DBA work much more convenient; but precisely because it's ZFS, both sides (ZFS & PostgreSQL) need to be tuned together to keep performance up...
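
To give a flavor of what this co-tuning looks like, here are a few commonly repeated recommendations (my own summary of typical advice, not necessarily the article's exact values; the dataset name is made up):

# ZFS side: match recordsize to PostgreSQL's 8 kB pages, turn on cheap
# compression, and stop tracking access times (dataset name hypothetical)
zfs set recordsize=8K tank/pgdata
zfs set compression=lz4 tank/pgdata
zfs set atime=off tank/pgdata

# postgresql.conf side: ZFS's copy-on-write semantics already prevent
# torn pages, so WAL full-page writes are commonly turned off
full_page_writes = off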

In the short term I'll probably stick with RDS anyway...

The RNG in the Linux Kernel Switches from SHA-1 to BLAKE2s

Via Hacker News Daily: the SHA-1 algorithm used inside the Linux kernel's RNG has been replaced with BLAKE2s.

SHA-1's known weaknesses were a lingering liability; picking BLAKE2s as the replacement is presumably the maintainer's preference, since Jason Donenfeld also uses BLAKE2s in WireGuard...
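
If you want to poke at the primitive itself from userspace (the kernel of course uses its own in-tree implementation), OpenSSL 1.1.0+ ships BLAKE2s-256; a minimal sketch:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void) {
  unsigned char md[EVP_MAX_MD_SIZE];
  unsigned int len = 0;
  const char *msg = "hello";
  /* one-shot BLAKE2s-256 digest via the EVP interface */
  if (!EVP_Digest(msg, strlen(msg), md, &len, EVP_blake2s256(), NULL))
    return 1;
  for (unsigned int i = 0; i < len; i++) printf("%02x", md[i]);
  printf("\n");
  return 0;
}
// build: cc blake2s.c -lcrypto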

Using Exodus to Ship Linux ELF Binaries to Other Machines

A tool I saw on Hacker News Daily a few days ago: "Exodus". The official description:

Painless relocation of Linux binaries–and all of their dependencies–without containers.

Technically, besides moving a Linux ELF binary to another machine, it also bundles all the dynamic libraries it depends on:

  • Finding and bundling all of a binary's dependencies.
  • Launching the binary in such a way that the proper dependencies are used without any potential interaction from system libraries on the destination machine.

Since the Linux kernel tries hard to maintain ABI compatibility, there shouldn't be big problems, unless the binary happens to use a brand-new API...
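
As I understand it, the underlying trick looks roughly like the following (a hand-rolled illustration of the idea, not exodus's actual output): collect the shared objects the dynamic linker would resolve, then launch the binary through a bundled copy of the linker so the destination machine's system libraries are never consulted:

# find the shared objects the binary needs and copy them into a bundle
ldd ./ffmpeg
mkdir -p bundle/lib
cp /lib/x86_64-linux-gnu/libc.so.6 bundle/lib/   # ...plus every other dep

# run through the bundled dynamic linker, resolving only bundled libraries
./bundle/lib/ld-linux-x86-64.so.2 --library-path ./bundle/lib ./ffmpeg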

Looks like an alternative to static compilation; maybe I should give it a spin with FFmpeg...