The UK government starts regularly scanning internet services hosted in the UK

Seen on Hacker News Daily: the UK government has also built its own scanning database: 「NCSC Scanning information」.

It looks like a service similar to Shodan, but with its scope limited to internet services hosted in the UK:

These activities cover any internet-accessible system that is hosted within the UK and vulnerabilities that are common or particularly important due to their high impact.

That said, presumably every country does something like this; the only question is whether they talk about it publicly.

GitHub changes its API token format

A few days ago GitHub announced the switch to its new API token format: 「Authentication token format updates are generally available」; the change was first announced in early March this year: 「Authentication token format updates」.

Yesterday they also explained the advantages of the new format: 「Behind GitHub's new authentication token formats」.

First, the token character set got bigger:

The character set changed from [a-f0-9] to [A-Za-z0-9_]

They also added a prefix that directly indicates what kind of token it is:

The format now includes a prefix for each token type:

  • ghp_ for Personal Access Tokens
  • gho_ for OAuth Access tokens
  • ghu_ for GitHub App user-to-server tokens
  • ghs_ for GitHub App server-to-server tokens
  • ghr_ for GitHub App refresh tokens
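
A minimal sketch of how a developer or an ops tool could take advantage of this, mapping the prefixes above to token types. The mapping comes straight from the list; everything else (the function name, the fallback message) is just for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenType guesses what kind of GitHub token a string is, based purely on
// the prefixes from GitHub's announcement. This is an illustrative helper,
// not an official API.
func tokenType(token string) string {
	prefixes := map[string]string{
		"ghp_": "Personal Access Token",
		"gho_": "OAuth Access token",
		"ghu_": "GitHub App user-to-server token",
		"ghs_": "GitHub App server-to-server token",
		"ghr_": "GitHub App refresh token",
	}
	for prefix, kind := range prefixes {
		if strings.HasPrefix(token, prefix) {
			return kind
		}
	}
	return "unknown (possibly an old-format 40-character hex token)"
}

func main() {
	fmt.Println(tokenType("ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"))
	fmt.Println(tokenType("d6cd1e2bd19e03a81132a23b2025920577f84e37"))
}
```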

For now, GitHub is not changing the token length (the larger character set already increases entropy), but they plan to increase it in the future:

The length of our tokens is remaining the same for now. However, GitHub tokens will likely increase in length in future updates, so integrators should plan to support tokens up to 255 characters after June 1, 2021.

It looks like integrations that treated the old tokens as hex strings and converted them to binary for storage will run into problems here, although even in that case the stored values should still be convertible back.

Back to the advantages: this approach is similar to what Slack and Stripe do, making it easier for developers and administrators to identify the type of a token:

As we see across the industry from companies like Slack and Stripe, token prefixes are a clear way to make tokens identifiable. We are including specific 3 letter prefixes to represent each token, starting with a company signifier, gh, and the first letter of the token type.

It also makes secret scanning more accurate: the old tokens were 40-character hex strings, which could collide with SHA-1 strings inside source code:

Many of our old authentication token formats are hex-encoded 40 character strings that are indistinguishable from other encoded data like SHA hashes. These have several limitations, such as inefficient or even inaccurate detection of compromised tokens for our secret scanning feature.
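
To make that concrete, here is a rough comparison of the two situations using Go's regexp package. The exact length of the new tokens is deliberately left open here, so the `{30,}` quantifier is just an illustrative lower bound, not the real spec:

```go
package main

import (
	"fmt"
	"regexp"
)

// A bare 40-hex-character pattern matches every SHA-1 that shows up in source
// code (git commit IDs, checksums, ...), so scanning for old-format tokens is
// noisy. A prefix-anchored pattern has no such collisions.
var (
	oldStyle = regexp.MustCompile(`\b[a-f0-9]{40}\b`)
	newStyle = regexp.MustCompile(`\bgh[pousr]_[A-Za-z0-9_]{30,}\b`)
)

func main() {
	commit := "d6cd1e2bd19e03a81132a23b2025920577f84e37" // a git commit SHA, not a token
	fmt.Println(oldStyle.MatchString(commit)) // true: false positive
	fmt.Println(newStyle.MatchString(commit)) // false: the prefix removes the ambiguity
}
```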

GitHub also recommends rotating existing tokens to the new format, so that if a leak does happen, secret scanning can detect it and send a notification:

We strongly encourage you to reset any personal access tokens and OAuth tokens you have. These improvements help secret scanning detection and will help you mitigate any risk to compromised tokens.

GC improvements in Go 1.9

Update: after being reminded I took a closer look; this is actually enabled by default in 1.8 (with an option kept around to switch back for debugging), and if nothing goes wrong the old mechanism will be removed entirely in 1.9:

Assuming things go smoothly, we will remove stack re-scanning support when the tree opens for Go 1.9 development.

I'll leave the title unchanged... the original post follows.

Seen at 「Sub-millisecond GC pauses」. The Go team is working on reducing the impact of GC: 「Proposal: Eliminate STW stack re-scanning」.

The goal is to eliminate the largest remaining source of GC pauses:

As of Go 1.7, the one remaining source of unbounded and potentially non-trivial stop-the-world (STW) time is stack re-scanning.

The proposed solution attacks this directly; preliminary tests show it can bring the pause down to under 50µs (i.e. 0.05ms):

We propose to eliminate the need for stack re-scanning by switching to a hybrid write barrier that combines a Yuasa-style deletion write barrier [Yuasa '90] and a Dijkstra-style insertion write barrier [Dijkstra '78]. Preliminary experiments show that this can reduce worst-case STW time to under 50µs, and this approach may make it practical to eliminate STW mark termination altogether.
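
The quoted paragraph is the heart of the proposal, so here is a toy Go model of what such a hybrid barrier does on a pointer write. `shade` and `stackIsGrey` are hypothetical stand-ins for runtime internals (marking an object grey, and "this goroutine's stack hasn't been scanned yet"); this is a sketch of the idea, not the actual runtime code:

```go
package main

import "fmt"

// Toy model of the hybrid write barrier, paraphrased from the proposal.
// shade() stands in for the runtime marking an object grey (queued to be
// scanned); stackIsGrey stands in for "this goroutine's stack has not been
// scanned yet in the current GC cycle".
type obj struct{ name string }

func shade(o *obj) {
	if o != nil {
		fmt.Println("shade:", o.name)
	}
}

var stackIsGrey = true

// writePointer models the barrier applied to `*slot = ptr`: a Yuasa-style
// deletion barrier shades the old pointee so it cannot be collected this
// cycle, and a Dijkstra-style insertion barrier shades the new pointee while
// the writer's stack is still grey, so a pointer cannot "hide" on an
// unscanned stack.
func writePointer(slot **obj, ptr *obj) {
	shade(*slot)
	if stackIsGrey {
		shade(ptr)
	}
	*slot = ptr
}

func main() {
	a, b := &obj{name: "a"}, &obj{name: "b"}
	field := a
	writePointer(&field, b)
	fmt.Println("field now points to", field.name)
}
```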

Progress can be tracked at 「runtime: eliminate stack rescanning · Issue #17503 · golang/go」; the work has already landed on the master branch and looks like it will ship in 1.9... although the worst-case number has been revised upward XDDD

The high level summary is that this reduces worst-case STW time to about 100 µs and typical 95%ile STW time to 50 µs (assuming, of course, that the OS doesn't get in the way and that the system isn't otherwise overloaded).

Still, it looks like a big performance improvement, especially for CPU-bound applications?
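
If you want to see what these pauses look like on your own workload, here is a minimal sketch using the standard runtime.MemStats counters; the allocation loop exists only to force a few GC cycles. Running with GODEBUG=gctrace=1 prints similar per-cycle numbers.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Churn through some allocations so the GC runs a few times.
	var keep [][]byte
	for i := 0; i < 1000; i++ {
		keep = append(keep, make([]byte, 1<<20))
		if len(keep) > 64 {
			keep = keep[1:]
		}
	}

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Println("GC cycles:", ms.NumGC)
	// PauseNs is a circular buffer; the most recent pause sits at (NumGC+255)%256.
	for i := uint32(0); i < ms.NumGC && i < 5; i++ {
		pause := ms.PauseNs[(ms.NumGC+255-i)%256]
		fmt.Printf("pause -%d: %v\n", i, time.Duration(pause))
	}
}
```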

Another article on document scanning...

「Page dewarping」 covers document-scanning techniques along with an open source program; comparing its timing with the previously mentioned "Dropbox's document scanning feature" and "Dropbox's Document Detecting" posts, there's a faint whiff of one-upmanship XD

The author wrote this for his fiancée: originally he ran a script by hand whenever students emailed in their homework, then his fiancée started asking him to run it on archival documents, and when she came back with a pile of images whose text was warped by curled pages, he decided to handle that automatically as well:

A while back, I wrote a script to create PDFs from photos of hand-written text. It was nothing special – just adaptive thresholding and combining multiple images into a PDF – but it came in handy whenever a student emailed me their homework as a pile of JPEGs. After I demoed the program to my fiancée, she ended up asking me to run it from time to time on photos of archival documents for her linguistics research. This summer, she came back from the library with a number of images where the text was significantly warped due to curled pages.

So I decided to write a program that automatically turns pictures like the one on the left below to the one on the right:

The code can all be found on GitHub: 「Text page dewarping using a "cubic sheet" model」. It feels like he's going head-to-head with Dropbox XDDD

Dropbox's Document Detecting

Dropbox published the document detection techniques they have been working on: 「Fast and Accurate Document Detection for Scanning」.

The goal is to find the "edges" of the "document" in a photo like this one:

A Canny edge detector produces output like this, with obviously many extra, not-quite-correct edges, which makes the subsequent decisions considerably harder:

Coincidentally, another article I read recently, 「Image Kernels Explained Visually」, covers image kernels; some parts overlap, and reading the two side by side is quite illuminating...
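
As a reading companion, here is a tiny, self-contained Go version of the kernel idea from that article: slide a 3×3 matrix over a grayscale image and replace each pixel with the weighted sum of its neighbourhood. The kernel used is a Sobel kernel for the horizontal gradient (it responds to vertical edges), one of the building blocks behind detectors like Canny; the synthetic test image is purely for illustration:

```go
package main

import (
	"fmt"
	"image"
	"image/color"
)

// Sobel kernel for the horizontal gradient: responds strongly to vertical edges.
var sobelX = [3][3]float64{
	{-1, 0, 1},
	{-2, 0, 2},
	{-1, 0, 1},
}

// convolve applies a 3x3 kernel to a grayscale image, keeping the absolute
// value of the response and clamping it to 0..255. Border pixels are simply
// skipped to keep the example short.
func convolve(src *image.Gray, k [3][3]float64) *image.Gray {
	b := src.Bounds()
	dst := image.NewGray(b)
	for y := b.Min.Y + 1; y < b.Max.Y-1; y++ {
		for x := b.Min.X + 1; x < b.Max.X-1; x++ {
			var sum float64
			for ky := 0; ky < 3; ky++ {
				for kx := 0; kx < 3; kx++ {
					sum += k[ky][kx] * float64(src.GrayAt(x+kx-1, y+ky-1).Y)
				}
			}
			if sum < 0 {
				sum = -sum
			}
			if sum > 255 {
				sum = 255
			}
			dst.SetGray(x, y, color.Gray{Y: uint8(sum)})
		}
	}
	return dst
}

func main() {
	// Synthetic image: left half dark, right half bright, so the vertical
	// boundary in the middle should produce a strong response.
	img := image.NewGray(image.Rect(0, 0, 8, 8))
	for y := 0; y < 8; y++ {
		for x := 4; x < 8; x++ {
			img.SetGray(x, y, color.Gray{Y: 200})
		}
	}
	out := convolve(img, sobelX)
	fmt.Println("on the edge:", out.GrayAt(4, 4).Y) // strong response
	fmt.Println("flat region:", out.GrayAt(1, 4).Y) // no response
}
```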

Anyway, Dropbox's final results look pretty good; check out the demo: