Fixing Low Microphone Volume After Rebooting Ubuntu

After rebooting my Ubuntu desktop, the volume of my external USB microphone, an AVerMedia AM310, drops too low; I usually only find out when a colleague points it out in a meeting and then go adjust it.

I checked whether this was a known bug, and it looks related to 「Mic input volume always resets (to middle/low value) on resume or restart」, but that report was filed against 16.04, and now that 22.04 is out there still doesn't seem to be any progress...

The next step was to look for a workaround. One idea was to find a way to set the volume from the command line, which could then be run automatically at startup.

That led to the Q&A 「How can i increase microphone volume beyond 100%」. First, list all the sources with this command:

pactl list sources

The AM310 shows up in the output, and you can then set its volume by referencing its name:

pactl set-source-volume alsa_input.usb-AVerMedia_AVerMedia_AM310_USB_Microphone-00.multichannel-input 70%

Drop it into a startup script that runs at login and you're done.
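
As a concrete example (my own sketch, not from the original post; the script name and path are arbitrary), a tiny wrapper script is enough:

#!/bin/sh
# ~/bin/fix-am310-volume.sh - reset the AM310 input volume after login.
# The source name comes from `pactl list sources`; adjust it if yours differs.
pactl set-source-volume \
    alsa_input.usb-AVerMedia_AVerMedia_AM310_USB_Microphone-00.multichannel-input 70%

Point your desktop's startup mechanism at it (an entry under ~/.config/autostart/ or Ubuntu's "Startup Applications") so it runs once per login, after the sound server is already up.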

The dig "feature" you hit when querying Switzerland's top-level domain

Saw 「DNS Esoterica - Why you can't dig Switzerland」 on Hacker News, which describes a dig "feature".

To look up the NS records for tw, you'd run:

$ dig tw ns

and the result lists all the NS servers:

;; ANSWER SECTION:
tw.                     3600    IN      NS      h.dns.tw.
tw.                     3600    IN      NS      a.dns.tw.
tw.                     3600    IN      NS      g.dns.tw.
tw.                     3600    IN      NS      d.dns.tw.
tw.                     3600    IN      NS      anytld.apnic.net.
tw.                     3600    IN      NS      f.dns.tw.
tw.                     3600    IN      NS      b.dns.tw.
tw.                     3600    IN      NS      e.dns.tw.
tw.                     3600    IN      NS      c.dns.tw.
tw.                     3600    IN      NS      ns.twnic.net.

As the author shows, dig uk ns for the uk TLD gives a similar result:

;; ANSWER SECTION:
uk.                     86400   IN      NS      dns1.nic.uk.
uk.                     86400   IN      NS      dns4.nic.uk.
uk.                     86400   IN      NS      nsa.nic.uk.
uk.                     86400   IN      NS      nsb.nic.uk.
uk.                     86400   IN      NS      nsc.nic.uk.
uk.                     86400   IN      NS      nsd.nic.uk.
uk.                     86400   IN      NS      dns3.nic.uk.
uk.                     86400   IN      NS      dns2.nic.uk.

But if you run dig ch ns, you get an error like this:

; <<>> DiG 9.16.1-Ubuntu <<>> ch ns
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 5019
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;.                              CH      NS

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Jul 15 06:54:24 CST 2022
;; MSG SIZE  rcvd: 28

The reason is that the keyword CH is the abbreviation for Chaosnet, so dig gives it a special interpretation:

Set the query class. The default class is IN; other classes are HS for Hesiod records or CH for Chaosnet records.

To avoid that interpretation, add a trailing dot (.) and query it as an FQDN:

dig ch. ns

and you get the correct result:

;; ANSWER SECTION:
ch.                     86400   IN      NS      a.nic.ch.
ch.                     86400   IN      NS      b.nic.ch.
ch.                     86400   IN      NS      f.nic.ch.
ch.                     86400   IN      NS      d.nic.ch.
ch.                     86400   IN      NS      e.nic.ch.

Another option is dig -c IN -t NS ch, spelling things out with options so dig can't misread the query.

The root cause of Hacker News being down for so long the other day

Hacker News was down for quite a while a few days ago. Going by what was posted on Twitter, it started with this tweet at 2022/07/08 14:08 UTC:

with a failed restore attempt in the middle (2022/07/08 17:35 UTC):

and finally recovery (2022/07/08 20:48 UTC):

Based on the Twitter timeline that's a bit over six hours, which is quite long for a site where presumably only the database needed restoring, so the cause got discussed afterwards in 「HN is up again」, where HN's boss dang mentioned the downtime was over seven hours:

8 hours of downtime, but no data loss, since there was no data to lose during the downtime.

Last post before we went down (2022-07-08 12:46:04 UTC): https://news.ycombinator.com/item?id=32026565

First post once we were back up (2022-07-08 20:30:55 UTC): https://news.ycombinator.com/item?id=32026571 (hey, that's this thread! how'd you do that, tpmx?)

So, 7h 45m of downtime. What we don't know is how many posts (or votes, etc.) happened after our last backup, and were therefore lost. The latest vote we have was at 2022-07-08 12:46:05 UTC, which is about the same as the last post.

There can't be many lost posts or votes, though, because I checked HN Search (https://hn.algolia.com/) just before we brought HN back up, and their most recent comment and story were behind ours. That means our last backup on the ill-fated server was taken after the last API update (HN Search relies on our API), and the API gets updated every 30 seconds.

I'm not saying that's a rock-solid argument, but it suggests that 30 seconds is an upper bound on how much data we lost.

People also went looking for dang's replies (first-hand information, after all), and a quick Ctrl-F turned up an interesting guess; the fun sub-thread starts at comment 32028511. First up, mikiem:

You are never going to guess how long the HN SSDs were in the servers... never ever... OK... I'll tell you: 4.5years. I am not even kidding.

Then kabdib's reply:

Let me narrow my guess: They hit 4 years, 206 days and 16 hours . . . or 40,000 hours.

And that they were sold by HP or Dell, and manufactured by SanDisk.

Do I win a prize?

(None of us win prizes on this one).

And then dang said he thought this guess might well be right:

Wow. It's possible that you have nailed this.

Edit: here's why I like this theory. I don't believe that the two disks had similar levels of wear, because the primary server would get more writes than the standby, and we switched between the two so rarely. The idea that they would have failed within hours of each other because of wear doesn't seem plausible.

But the two servers were set up at the same time, and it's possible that the two SSDs had been manufactured around the same time (same make and model). The idea that they hit the 40,000 hour mark within a few hours of each other seems entirely plausible.

Mike of M5 (mikiem in this thread) told us today that it "smelled like a timing issue" to him, and that is squarely in this territory.

He later also pulled the related submissions out of HN's own /newest; judging by the keywords he dug up, the servers appear to use SSDs from HPE:

It's also an example of the dharma of /newest – the rising and falling away of stories that get no attention:

HPE releases urgent fix to stop enterprise SSDs conking out at 40K hours - https://news.ycombinator.com/item?id=22706968 - March 2020 (0 comments)

HPE SSD flaw will brick hardware after 40k hours - https://news.ycombinator.com/item?id=22697758 - March 2020 (0 comments)

Some HP Enterprise SSD will brick after 40000 hours without update - https://news.ycombinator.com/item?id=22697001 - March 2020 (1 comment)

HPE Warns of New Firmware Flaw That Bricks SSDs After 40k Hours of Use - https://news.ycombinator.com/item?id=22692611 - March 2020 (0 comments)

HPE Warns of New Bug That Kills SSD Drives After 40k Hours - https://news.ycombinator.com/item?id=22680420 - March 2020 (0 comments)

(there's also https://news.ycombinator.com/item?id=32035934, but that was submitted today)

So this downtime looks very much like they were hit by the SSD firmware bug. For now they seem to have moved to EC2:

$ host news.ycombinator.com
news.ycombinator.com has address 50.112.136.166
$ host 50.112.136.166      
166.136.112.50.in-addr.arpa domain name pointer ec2-50-112-136-166.us-west-2.compute.amazonaws.com.

From the thread, that's probably temporary?

The two OpenSSL CVEs this time around

It's not often you see OpenSSL CVEs on the Hacker News front page: 「OpenSSL Security Advisory [5 July 2022]」, with the discussion at 「OpenSSL Security Advisory (openssl.org)」.

The first CVE is RCE-grade, but quite a few conditions have to line up to trigger it:

First, it involves RSA 2048-bit keys, a condition that's easy to meet.

Second, the vulnerable code was only introduced in OpenSSL 3.0.4, which was released on 2022/06/21, so not that many people have upgraded to it yet.

Third, the affected code path uses AVX-512 instructions, so it only triggers on CPUs from Intel or Centaur; the latter was formerly part of VIA and was sold to Intel last year (and apparently they didn't even renew their official domain...).

AMD supports AVX-512 in the Zen 4 architecture, but no Zen 4 products have shipped yet, so AMD dodges this one entirely XD

There's an extra restriction on top of the third condition: the code uses the IFMA instruction set, so not every CPU with AVX-512 support is affected:

Looking at Intel alone, the first architecture to support IFMA was Cannon Lake in 2018, which shipped in exactly one part, the mobile Intel® Core™ i3-8121U Processor.

IFMA only became broadly available on Intel CPUs from 2019 onward, and then on last year's Alder Lake AVX-512 is disabled by default again because the E-cores don't support it (the P-cores do).
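
If you want to check whether a machine even has the instructions in question, here's a minimal sketch (mine, not from the advisory), assuming GCC or Clang on x86, which expose the check through __builtin_cpu_supports():

#include <stdio.h>

int main(void) {
    /* "avx512ifma" is the feature name GCC/Clang use for AVX-512 IFMA.
     * Having it is only one of the preconditions for the bug: you would
     * still need OpenSSL 3.0.4 and RSA 2048-bit private key operations. */
    if (__builtin_cpu_supports("avx512ifma"))
        printf("AVX-512 IFMA present: CPU is in the potentially affected range\n");
    else
        printf("no AVX-512 IFMA: the vulnerable code path is never taken\n");
    return 0;
}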

So if you ask whether this bug is serious: absolutely, but the range of affected systems is rather narrow.

The second CVE is an implementation bug in AES-OCB. The interesting part is that the Hacker News discussion drew out the author of Mosh, who mentioned that when they tried switching to OpenSSL's AES-OCB implementation back in February, they actually hit this bug and their test cases caught it:

Mosh uses AES-OCB (and has since 2011), and we found this bug when we tried to switch over to the OpenSSL implementation (away from our own ocb.cc taken from the original authors) and Launchpad ran it through our CI testsuite as part of the Mosh dev PPA build for i686 Ubuntu. (It wasn't caught by GitHub Actions because it only happens on 32-bit x86.) https://github.com/mobile-shell/mosh/issues/1174 for more.

So I would say (a) OCB is widely used, at least by the ~million Mosh users on various platforms, and (b) this episode somewhat reinforces my (perhaps overweight already) paranoia about depending on other people's code or the blast radius of even well-meaning pull requests. (We really wanted to switch over to the OpenSSL implementation rather than shipping our own, in part because ours was depending on some OpenSSL AES primitives that OpenSSL recently deprecated for external users.)

Maybe one lesson here is that many people believe in the benefits of unit tests for their own code, but we're not as thorough or experienced in writing acceptance tests for our dependencies.

Mosh got lucky this time that we had pretty good tests that exercised the library enough to find this bug, and we run them as part of the package build, but it's not that farfetched to imagine that we might have users on a platform that we don't build a package for (and therefore don't run our testsuite on).

That's kind of delightful XDDD

Laravel will no longer have LTS releases

While looking something up I noticed that when Laravel 9 first shipped it was labeled as an LTS release (you can see it in the screenshot in 「Laravel 9 (LTS) 出了」), but the label was removed shortly after release; Taylor Otwell mentioned it on Twitter:

Judging by the tone of a few forum discussions, there won't be new LTS releases going forward. Each release gets one year of bug fixes plus security fixes, followed by one more year of security fixes only, so roughly two years of support in total. It more or less forces developers to upgrade once the clock runs out...

Another issue I noticed is that Laravel 9's supported PHP versions got pulled up along with the underlying Symfony, which now requires PHP 8.0+, so even PHP 7.4 is no longer supported:

You can patch around that with third-party repositories like 「***** The main PPA for supported PHP versions with many PECL extensions *****」, but it says something about Symfony's attitude toward these issues...

A Grammar Correction bug in Google Docs

Just saw an interesting bug on Hacker News: typing And. And. And. And. And. into Google Docs triggers an error: 「Including "And. And. And. And. And." in a Google doc causes it to crash (support.google.com)」. The original bug report is at 「Including "And. And. And. And. And." in a Google doc causes it to crash.」, and the error message looks like this:

The Hacker News discussion mentions that the grammar check feature has to be enabled, and it looks like any capitalized word repeated five times in the same pattern will do it (Also, Therefore, And, Anyway, But, Who, Why, ...):

Also, Therefore, And, Anyway, But, Who, Why.

Each in caps 5 times with the same word with a period and space after each word and newline at the end is what I have found so far.

Can anyone find others?

Edit: added words that work found in other comments

A pretty amusing bug XDDD, and right now it's sitting at #1 on the Hacker News front page...

Kagi's holy war: Emacs vs. Vi

I've been using Kagi as my default search engine, and Kagi has a habit of posting a Changelog every week... and this week's Changelog contains this:

I think I just spotted something I wasn't supposed to see:

Searching for emacs redirects to vi #327 @yjp20

The bug report mentions that searching for Emacs suggests Vi:

and searching for Vi suggests Emacs:

Are they trying to start a holy war or what XDDD

The story behind the recent Dirty Pipe Linux kernel vulnerability is pretty interesting...

Saw 「The Dirty Pipe Vulnerability」, a Linux kernel security issue, on Hacker News; the related discussion is at 「The Dirty Pipe Vulnerability (cm4all.com)」.

The culprit this time is splice(). Let's start with the code he wrote to reproduce the bug. The first program runs in a loop as user1:

#include <unistd.h>
int main(int argc, char **argv) {
  for (;;) write(1, "AAAAA", 5);
}
// ./writer >foo

Then the second program also runs in a loop (it can be a different user, user2, with no access whatsoever to user1's files):

#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>
int main(int argc, char **argv) {
  for (;;) {
    /* move 2 bytes from stdin (foo) into the pipe on stdout */
    splice(0, 0, 1, 0, 2, 0);
    /* write into the same pipe; this data should never end up in foo */
    write(1, "BBBBB", 5);
  }
}
// ./splicer <foo |cat >/dev/null

In theory you should never see any BBBBB strings inside foo, yet they punch right through... With git bisect he also confirmed that the problem was introduced in the 「pipe: merge anon_pipe_buf*_ops」 commit.
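
The bisect itself is the standard kernel workflow; roughly (a generic sketch, the good/bad revisions below are placeholders rather than the ones he actually used):

git bisect start
git bisect bad HEAD        # a kernel that reproduces the corruption
git bisect good v5.7       # an older kernel believed to be clean
# build and boot the suggested revision, run the writer/splicer pair,
# check foo for BBBBB, then mark the result:
git bisect good            # or: git bisect bad
# repeat until git prints the first bad commit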

Getting to the root cause took a long time, though. It started with a support ticket for their web hosting service: the access logs a customer downloaded turned out to be corrupt and couldn't be decompressed:

It all started a year ago with a support ticket about corrupt files. A customer complained that the access logs they downloaded could not be decompressed. And indeed, there was a corrupt log file on one of the log servers; it could be decompressed, but gzip reported a CRC error.

He fixed it by hand and closed the ticket:

I fixed the file’s CRC manually, closed the ticket, and soon forgot about the problem.

It happened again a few months later, and after several more support tickets he had some "data" to look at:

Months later, this happened again and yet again. Every time, the file’s contents looked correct, only the CRC at the end of the file was wrong. Now, with several corrupt files, I was able to dig deeper and found a surprising kind of corruption. A pattern emerged.

Since it didn't happen very often and he'd hit a logical dead end, he couldn't justify spending too much time on it:

None of this made sense, but new support tickets kept coming in (at a very slow rate). There was some systematic problem, but I just couldn’t get a grip on it. That gave me a lot of frustration, but I was busy with other tasks, and I kept pushing this file corruption problem to the back of my queue.

Later he really did put the time in, scanning the whole disk for corrupt files using the pattern he'd found earlier, and a regularity showed up:

External pressure brought this problem back into my consciousness. I scanned the whole hard disk for corrupt files (which took two days), hoping for more patterns to emerge. And indeed, there was a pattern:

  • there were 37 corrupt files within the past 3 months
  • they occurred on 22 unique days
  • 18 of those days have 1 corruption
  • 1 day has 2 corruptions (2021-11-21)
  • 1 day has 7 corruptions (2021-11-30)
  • 1 day has 6 corruptions (2021-12-31)
  • 1 day has 4 corruptions (2022-01-31)

The last day of each month is clearly the one which most corruptions occur.

He then tried writing various reproducers, the one that finally worked being the code shown at the top. After realizing the flaw could be a security vulnerability, he reported it. From the first support ticket to the final resolution took almost a year, although the Linux kernel side fixed it quite quickly:

  • 2021-04-29: first support ticket about file corruption
  • 2022-02-19: file corruption problem identified as Linux kernel bug, which turned out to be an exploitable vulnerability
  • 2022-02-20: bug report, exploit and patch sent to the Linux kernel security team
  • 2022-02-21: bug reproduced on Google Pixel 6; bug report sent to the Android Security Team
  • 2022-02-21: patch sent to LKML (without vulnerability details) as suggested by Linus Torvalds, Willy Tarreau and Al Viro
  • 2022-02-23: Linux stable releases with my bug fix (5.16.11, 5.15.25, 5.10.102)
  • 2022-02-24: Google merges my bug fix into the Android kernel
  • 2022-02-28: notified the linux-distros mailing list
  • 2022-03-07: public disclosure

Quite a story XD

The sprintf scalability problem on Mac

Saw an interesting scalability issue on Hacker News: sprintf() on Mac has a scalability problem caused by a lock: 「Curious lack of sprintf scaling (aras-p.info)」.

The author noticed that sprintf() on Mac doesn't scale across multiple CPUs; note that the Y axis here is logarithmic:

Using std::stringstream with << is even slower (the author throws in a jab about 「Zero cost abstractions」):

He then profiled it with Instruments, and the problem looks locale-related:

Under normal circumstances this shouldn't matter, but if you assemble a lot of strings with sprintf() it's something to watch out for.
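
If you want to see whether your own platform's snprintf() serializes across threads, here's a rough sketch of that kind of measurement (my own code, not the author's benchmark, assuming POSIX threads; thread and iteration counts are arbitrary):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERATIONS 1000000

static void *worker(void *arg) {
    (void)arg;
    char buf[64];
    for (int i = 0; i < ITERATIONS; i++) {
        /* %f goes through the locale machinery, which is where the
           contention shows up on the affected platform */
        snprintf(buf, sizeof buf, "%f", (double)i * 0.5);
    }
    return NULL;
}

int main(int argc, char **argv) {
    int nthreads = (argc > 1) ? atoi(argv[1]) : 4;
    if (nthreads < 1 || nthreads > 64) nthreads = 4;
    pthread_t threads[64];

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(threads[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d threads: %.3f s\n", nthreads, elapsed);
    return 0;
}

Build with cc -O2 -pthread bench.c -o bench and run it with increasing thread counts; since each thread does a fixed amount of work, the wall time should stay roughly flat if snprintf() scales, and it climbs with the thread count if the threads are contending on a shared lock inside it.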

The 「What else can we do?」 section mentions a few fixes: stb_sprintf as a drop-in replacement, {fmt} as a replacement for iostreams, and std::to_chars if all you need is turning numbers into strings.

A fun piece of bug hunting. For developers the general takeaway is still profiling: find the right problem first, then work toward a fix...

The current Mutt maintainer's "energy-saving" announcement

Saw 「Mutt 2.2.0 (mutt.org)」 on Hacker News: Mutt's maintainer (Kevin McCarthy, presumably) will only be handling fixes from now on:

This obviously isn't a feature, but I wanted to mention that I will be moving away from Mutt maintainership after this release. There isn't a transition plan, so I'll keep maintaining the 2.2.x series with bug fixes and security issues.

It's been my pleasure to keep the releases coming since version 1.5.24. Unfortunately the past year, my time and energy available has been decreasing. So my plan is to focus the time I do have on keeping Mutt stable, secure, and bug free; until someone else has the desire to head up (and support) new-feature releases. Thank you everyone!

Someone in the thread mentioned NeoMutt; maybe I'll find a chance to look at it later...

I still have one email setup running on Postfix + Procmail + Bogofilter + Mutt, and I probably won't be switching any time soon...