Ken Thompson's Password

Just saw this thread, and it's pretty entertaining...

It started because the BSD 3 source tree contains an /etc/passwd file, and it's the version with crypt hashes in it: "unix-history-repo/etc/passwd".

Quite a few of the passwords in it have already been cracked, but some still haven't... and the recent news is that ken's (Ken Thompson's) password has now been recovered: "Ken Thompson's Unix password".

From: Nigel Williams <nw@retrocomputingtasmania.com>
Cc: TUHS main list <tuhs@minnie.tuhs.org>
Subject: Re: [TUHS] Recovered /etc/passwd files
Date: Wed, 9 Oct 2019 16:49:48 +1100

ken is done:

ZghOT0eRm4U9s:p/q2-q4!

took 4+ days on an AMD Radeon Vega64 running hashcat at about 930MH/s
during that time (those familiar know the hash-rate fluctuates and
slows down towards the end).
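
If you want to sanity-check the result yourself, the hash uses the classic DES-based crypt(3) scheme, so verifying the recovered password is straightforward. A minimal Python sketch (assuming a Unix libcrypt that still supports DES crypt; note that the crypt module is deprecated in recent Python versions):

```python
# Verify the recovered password against ken's hash from the BSD 3 /etc/passwd.
# Requires a Unix libcrypt with classic DES crypt support.
import crypt

stored_hash = "ZghOT0eRm4U9s"   # ken's entry
candidate = "p/q2-q4!"          # the recovered password

# crypt() accepts the stored hash as the salt argument; for DES crypt
# only the first two characters ("Zg") are actually used as the salt.
assert crypt.crypt(candidate, stored_hash) == stored_hash
print("password matches")
```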

The person who cracked it also noticed that the password is a chess move in descriptive notation ("p/q2-q4" is the queen's pawn advancing two squares, 1. d4 in modern algebraic notation), which also fits Ken Thompson's background:

From: Nigel Williams <nw@retrocomputingtasmania.com>
Cc: TUHS main list <tuhs@minnie.tuhs.org>
Subject: Re: [TUHS] Recovered /etc/passwd files
Date: Wed, 9 Oct 2019 16:52:00 +1100

On Wed, Oct 9, 2019 at 4:49 PM Nigel Williams
<nw@retrocomputingtasmania.com> wrote:
> ZghOT0eRm4U9s:p/q2-q4!

BTW, is that a chess move?

But the part I find the most fun is this one, though I'm not sure whether it's really him:

From: Ken Thompson via TUHS <tuhs@minnie.tuhs.org>
To: Andy Kosela <akosela@andykosela.com>
Cc: TUHS main list <tuhs@minnie.tuhs.org>
Subject: Re: [TUHS] Recovered /etc/passwd files
Date: Wed, 9 Oct 2019 01:53:25 -0700

congrats.

On Wed, Oct 9, 2019 at 1:16 AM Andy Kosela <akosela@andykosela.com> wrote:
>
> On 10/9/19, Warner Losh <imp@bsdimp.com> wrote:
> > On Tue, Oct 8, 2019, 11:52 PM Nigel Williams
> > <nw@retrocomputingtasmania.com>
> > wrote:
> >
> >> On Wed, Oct 9, 2019 at 4:49 PM Nigel Williams
> >> <nw@retrocomputingtasmania.com> wrote:
> >> > ZghOT0eRm4U9s:p/q2-q4!
> >>
> >> BTW, is that a chess move?
> >>
> >
> > Most common opening.
> >
>
> Descriptive chess notation is not as popular today as it was back in
> the 70s, but it actually makes perfect sense as Ken is a long time
> chess enthusiast.
>
> --Andy

There's also Rob Pike's rather disapproving take on the whole thing:

From: Rob Pike <robpike@gmail.com>
To: Nigel Williams <nw@retrocomputingtasmania.com>
Cc: TUHS main list <tuhs@minnie.tuhs.org>
Subject: Re: [TUHS] Recovered /etc/passwd files
Date: Wed, 9 Oct 2019 09:59:43 -1000

I coulda told you that. One tends to learn passwords (inadvertently) when
they're short and typed nearby often enough. (Sorry, ken.)

If I remember right, the first half of this password was on a t-shirt
commemorating Belle's first half-move, although its notation may have been
different.

Interesting though it is, though, I find this hacking distasteful. It was
distasteful back when, and it still is. The attitudes around hackery have
changed; the position nowadays seems to be that the bad guys are doing it
so the good guys should be rewarded for doing it first. That's disingenuous
at best, and dangerous at worst.

-rob


On Tue, Oct 8, 2019 at 7:50 PM Nigel Williams <nw@retrocomputingtasmania.com>
wrote:

> ken is done:
>
> ZghOT0eRm4U9s:p/q2-q4!
>
> took 4+ days on an AMD Radeon Vega64 running hashcat at about 930MH/s
> during that time (those familiar know the hash-rate fluctuates and
> slows down towards the end).
>

It unexpectedly lured a whole crowd of people out of the woodwork...

KataGo, with Far Less Training Time

I've recently started seeing the name KataGo in various places (on Twitter and YouTube). Digging into it a bit, it turns out to be a major breakthrough in training cost: according to the paper, it is roughly fifty times faster...

On its first run, it used only up to 35 V100s for seven days and already reached roughly the strength of Leela Zero's 130th network (LZ130):

The first serious run of KataGo ran for 7 days in Februrary 2019 on up to 35xV100 GPUs. This is the run featured in the paper. It achieved close to LZ130 strength before it was halted, or up to just barely superhuman.

On the second run, it used up to 28 V100s to train 20-block networks; after 19 days it had already surpassed the ELFv2 network that Facebook released, which corresponds to roughly Leela Zero's 200th network (LZ200) in strength (note that Leela Zero had by then been training with a 40-block architecture for a while):

Following some further improvements and much-improved hyperparameters, KataGo performed a second serious run in May-June a max of 28xV100 GPUs, surpassing the February run after just three and a half days. The run was halted after 19 days, with the final 20-block networks reaching a final strength slightly stronger than LZ-ELFv2! (This is Facebook's very strong 20-block ELF network, running on Leela Zero's search architecture). Comparing to the yet larger Leela Zero 40-block networks, KataGo's network falls somewhere around LZ200 at visit parity, despite only itself being 20 blocks.

From the paper you can see that, like Leela Zero, the network size is increased step by step (presumably also with Net2Net) instead of starting at 20x256 from the very beginning:

In KataGo’s main 19-day run, (b, c) began at (6, 96) and switched to (10, 128), (15, 192), and (20, 256), at roughly 0.75 days, 1.75 days, and 7.5 days, respectively. The final size approximately matches that of AlphaZero and ELF.
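
As a quick illustration of the schedule quoted above, here is a minimal Python sketch that returns the (blocks, channels) size in use at a given point of the 19-day run; the switch points and sizes come straight from the quote, while the function itself is purely illustrative and not KataGo's actual code:

```python
# (training day when the size takes effect, (blocks, channels))
SCHEDULE = [
    (0.0, (6, 96)),
    (0.75, (10, 128)),
    (1.75, (15, 192)),
    (7.5, (20, 256)),
]

def network_size(day: float) -> tuple:
    """Return the (blocks, channels) in use at the given training day."""
    size = SCHEDULE[0][1]
    for start_day, blocks_channels in SCHEDULE:
        if day >= start_day:
            size = blocks_channels
    return size

assert network_size(0.5) == (6, 96)     # still the initial 6x96 network
assert network_size(10.0) == (20, 256)  # final size, matching AlphaZero/ELF
```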

The big improvement in training speed comes from two kinds of changes: general ones (in the "Major General Improvements" section) and Go-specific ones (in the "Major Domain-Specific Improvements" section).

There is also a lot of discussion about KataGo in Leela Zero's issue tracker, and the author appears to be taking part in it, so some results should come out of that...

Running an x86 Emulator on ARM, the Other Way Around...

Saw the "Box86 - Linux Userspace x86 Emulator with a twist, targeted at ARM Linux devices" project, an x86 emulator that runs on ARM machines.

The author has actually built quite a few supporting pieces, such as routing OpenGL through GL4ES, so the groundwork involved is considerably more than you might expect...

And from the examples the author gives, you can tell the main focus is games XDDD

A Toy That Switches Windows Automatically When the Boss Walks By...

Saw "Daytripper: Hide-My-Windows Laser Tripwire", which detects someone walking past (and then you can configure what happens next).

It consists of two devices.

It runs on a battery; a full charge lasts about 40 hours, and it recharges over USB Type-C...

Restrictions on Bringing Luggage onto the Shinkansen

The Tokaido/Sanyo Shinkansen and the Kyushu Shinkansen will introduce controls on oversized luggage next May: "新幹線へ持ち込みの大型荷物 予約制に"; the official press release is at "東海道・山陽・九州新幹線特大荷物置場の設置と事前予約制の導入について".

The rule applies to luggage whose three dimensions (length + width + height) add up to more than 160 cm and no more than 250 cm; dedicated space will be set aside where you can stow it. Reserving in advance is free, while doing it on the spot costs 1,000 yen:

According to JR Central and others, advance reservations will be required on the Tokaido/Sanyo Shinkansen and the Kyushu Shinkansen; the luggage goes in the space behind the last row of seats, or in lockable dedicated corners planned for the deck areas.

It applies to large suitcases and other luggage whose three dimensions add up to more than 160 cm and up to 250 cm; these must be reserved in advance together with a reserved seat, at ticket counters or online.

Bringing such luggage with an advance reservation is free, but without a reservation there is a charge of 1,000 yen.

I checked the specs of the 27-inch hard-shell suitcase I use myself, and it is still within the limit (65 + 45 + 29 = 139 cm, under the 160 cm threshold):

(External) height 65 cm (excluding wheels), width 45 cm, depth 29 cm, weight 5.2 kg
ABS plastic

I also flipped through the "尺吋│28吋以上 - PChome 24h購物" page: some 29-inch suitcases exceed the limit and some don't, so people who will be hauling luggage around Japan (which often happens when buying a JR Pass, or entering and leaving from different cities) should pay attention to this...

For now, those are the two line sections that have been announced.

An Algorithm for Upscaling Anime to 4K in Real Time

Saw the "Anime4K" project:

Anime4K is a state-of-the-art*, open-source, high-quality real-time anime upscaling algorithm that can be implemented in any programming language.

State of the art* as of August 2019 in the real time anime upscaling category, the fastest at acheiving reasonable quality. We do not claim this is a superior quality general purpose SISR algorithm compared to machine learning approaches.

The numbers they provide show that 1080p -> 2160p (4K) takes only 3 ms; at 60 fps that is well within the roughly 16.7 ms per-frame budget, and the quality looks pretty decent too.

One of the more interesting Q&A items is that 1080p -> 2160p is actually easier than 480p -> 720p, because a 1080p anime frame carries a lot more redundant information, which makes it easier to process:

Why not do PSNR/SSIM on 480p->720p upscaling
Story Time

Comparing PSNR/SSIM on 480p->720p upscales does not prove and is not a good indicator of 1080p->2160p upscaling quality. (Eg. poor performance of waifu2x on 1080p anime) 480p anime images have a lot of high frequency information (lines might be thinner than 1 pixel), while 1080p anime images have a lot of redundant information. 1080p->2160p upscaling on anime is thus objectively easier than 480p->720p.
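
For reference, the PSNR metric mentioned in the quote is just a log-scaled mean squared error between an upscaled frame and the ground-truth frame. A minimal NumPy sketch of the standard definition (illustrative only, not Anime4K's code):

```python
import numpy as np

def psnr(reference: np.ndarray, upscaled: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-sized images, in dB."""
    mse = np.mean((reference.astype(np.float64) - upscaled.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```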

A Service and a Paper on Improving Streaming Quality with Machine Learning

Saw the "Puffer" service on Hacker News; it uses a machine learning algorithm to adjust the bitrate dynamically in order to improve streaming quality.

The data gathered from the experiment was later written up in a paper: "Continual learning improves Internet video streaming".

The opening introduces the Fugu algorithm:

We describe Fugu, a continual learning algorithm for bitrate selection in streaming video.
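
To make "bitrate selection" concrete, here is a minimal buffer-based heuristic in Python, in the spirit of the classic buffer-based (BBA-style) baselines that papers like this compare against; it is not Fugu's learned transmission-time predictor, and the bitrate ladder and thresholds below are made up for illustration:

```python
# Hypothetical bitrate ladder (kbps) and buffer thresholds (seconds).
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]
LOW_BUFFER_S = 5.0    # below this, always pick the lowest bitrate
HIGH_BUFFER_S = 20.0  # above this, always pick the highest bitrate

def select_bitrate(buffer_seconds: float) -> int:
    """Pick the next chunk's bitrate purely from current buffer occupancy."""
    if buffer_seconds <= LOW_BUFFER_S:
        return BITRATES_KBPS[0]
    if buffer_seconds >= HIGH_BUFFER_S:
        return BITRATES_KBPS[-1]
    # Map the buffer level between the two thresholds linearly onto the ladder.
    fraction = (buffer_seconds - LOW_BUFFER_S) / (HIGH_BUFFER_S - LOW_BUFFER_S)
    return BITRATES_KBPS[int(fraction * (len(BITRATES_KBPS) - 1))]

print(select_bitrate(3.0))   # 300  (risk of rebuffering, be conservative)
print(select_bitrate(12.0))  # 750  (mid-ladder)
print(select_bitrate(25.0))  # 6000 (plenty of buffer, go for quality)
```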

And Puffer is the experimental website:

We evaluate Fugu with Puffer, a public website we built that streams live TV using Fugu and existing algorithms. Over a nine-day period in January 2019, Puffer streamed 8,131 hours of video to 3,719 unique users.

The site offers quite a few real TV channels for testing:

Stream live TV in your browser. There's no charge. You can watch U.S. TV stations affiliated with the NBC, CBS, ABC, PBS, FOX, and Univision networks.

You can see that Fugu's results are very good; compared with the other proposed schemes it is an across-the-board improvement.

The tests were done over WebSocket, combined with different TCP congestion control algorithms, and don't pay much attention to the extra computational cost...

The StarCraft II AI Will Play Anonymously Against Humans on European Battle.net

A few days ago Blizzard announced that DeepMind's StarCraft II AI (AlphaStar) will play against humans through Blizzard's European Battle.net servers: "DeepMind Research on Ladder".

Experimental versions of DeepMind’s StarCraft II agent, AlphaStar, will soon play a small number of games on the competitive ladder in Europe as part of ongoing research into AI.

You won't be matched against it by default; you have to opt in:

If you would like the chance to help DeepMind with its research by matching against AlphaStar, you can opt in by clicking the “opt-in” button on the in-game popup window. You can alter your opt-in selection at any time by using the “DeepMind opt-in” button on the 1v1 Versus menu.

But you still won't know whether your opponent is a human or the AI, and just like ordinary matches, it will affect your ranking:

For scientific test purposes, DeepMind will be benchmarking AlphaStar’s performance by playing anonymously during a series of blind trial matches. This means the StarCraft community will not know which matches AlphaStar is playing, to help ensure that all games are played under the same conditions. AlphaStar plays with built-in restrictions that the DeepMind team has defined in consultation with pro players. A win or a loss against AlphaStar will affect your MMR as normal.

Okay, that pretty much explains why it's only open in the European region...

Starting this July, California forbids AI from passing itself off as human (there was some news coverage a few days ago): "A California law now means chatbots have to disclose they're not human", and the corresponding statute can be found at "Bill Text - SB-1001 Bots: disclosure":

17941. (a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.

(b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.

And California is where Blizzard Entertainment is headquartered...

The statute does carve out an exemption around "online platform", but counting only StarCraft II players the numbers might not reach that threshold... so they had to sidestep it by testing in the European region instead?

(c) “Online platform” means any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.

(c) This chapter does not impose a duty on service providers of online platforms, including, but not limited to, Web hosting and Internet service providers.

The US military is presumably watching this topic very closely: unlike AlphaGo or AlphaZero, which play games with perfect information, this time it is stepping into a game with asymmetric information.

If results come out of this area, you can expect future wars (yeah, physical wars) to start adopting AI at scale...