The RNG in the Linux Kernel Switches from SHA-1 to BLAKE2s

Spotted on Hacker News Daily: the SHA-1 algorithm used inside the Linux kernel's RNG has been replaced with BLAKE2s.

SHA-1's known weaknesses were a lingering risk, but the switch to BLAKE2s specifically is probably down to maintainer preference: Jason Donenfeld also uses BLAKE2s in WireGuard...
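If you want a quick feel for BLAKE2s, it is available in Python's hashlib; this is purely the userspace library and has nothing to do with how the kernel RNG actually consumes it:

    import hashlib

    data = b"some entropy-ish input"
    print(hashlib.sha1(data).hexdigest())     # 160-bit digest, the old primitive
    print(hashlib.blake2s(data).hexdigest())  # 256-bit digest, the replacement primitive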

AWS Is About to Launch Graviton3 Instances

AWS is preparing to launch Graviton3-based instances, currently in preview: "Join the Preview – Amazon EC2 C7g Instances Powered by New AWS Graviton3 Processors".

The claim is up to 25% better compute performance than the previous-generation Graviton2, plus improvements in floating-point and cryptography-related workloads (those gains are presumably helped by new instruction set support):

In comparison to the Graviton2, the Graviton3 will deliver up to 25% more compute performance and up to twice as much floating point & cryptographic performance. On the machine learning side, Graviton3 includes support for bfloat16 data and will be able to deliver up to 3x better performance.

It also mentions signed pointers (pointer authentication), which keep the stack from being tampered with; this needs OS and compiler support, and is a defense aimed specifically at stack-based attacks:

Graviton3 processors also include a new pointer authentication feature that is designed to improve security. Before return addresses are pushed on to the stack, they are first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they are validated before being used. An exception is raised if the address is not valid, thereby blocking attacks that work by overwriting the stack contents with the address of harmful code. We are working with operating system and compiler developers to add additional support for this feature, so please get in touch if this is of interest to you.
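The mechanism can be illustrated with a toy sketch, a conceptual model of mine rather than the actual ARMv8.3 PAC instructions: the return address is signed with a secret key plus the current stack pointer as context before being pushed, and verified when popped.

    import hmac, hashlib, os

    SECRET_KEY = os.urandom(32)  # per-process key, a stand-in for the CPU-held PAC key

    def sign(return_address: int, stack_pointer: int) -> bytes:
        # Sign the return address, mixing in the stack pointer as context,
        # so a signed value copied from elsewhere on the stack will not verify.
        msg = return_address.to_bytes(8, "little") + stack_pointer.to_bytes(8, "little")
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()[:8]

    def push(stack: list, return_address: int, stack_pointer: int) -> None:
        stack.append((return_address, sign(return_address, stack_pointer)))

    def pop(stack: list, stack_pointer: int) -> int:
        return_address, tag = stack.pop()
        if not hmac.compare_digest(tag, sign(return_address, stack_pointer)):
            raise RuntimeError("pointer authentication failed")  # analogous to the CPU raising an exception
        return return_address

    stack = []
    push(stack, 0x400123, 0x7ffdeadbeef0)
    assert pop(stack, 0x7ffdeadbeef0) == 0x400123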

It also uses DDR5 memory:

C7g instances will be available in multiple sizes (including bare metal), and are the first in the cloud industry to be equipped with DDR5 memory. In addition to drawing less power, this memory delivers 50% higher bandwidth than the DDR4 memory used in the current generation of EC2 instances.

There is no pricing yet; it might land at the same price point as c6g, but given the change in memory architecture it could also end up somewhat more expensive.

Digging around a bit more, Arm has published a press release which makes clear Graviton2 is built on Arm's Neoverse cores (Arm even used those machines to design the Cortex-M55): "Designing Arm Cortex-M55 CPU on Arm Neoverse powered AWS Graviton2 Processors". More details on Graviton3 should come out once it is fully announced...

A Look at the Intel + Varnish PR Claiming 500Gbps from a Single Machine

"Varnish Software Achieves 500Gbps Throughput Per Server for UHD Video Content" is a PR piece from a collaboration between Intel and Varnish Software, claiming 500Gbps of throughput from a single machine:

According to Varnish Software, the following were the outcomes of the test:

  • 509.7 Gbps live-linear throughput, using a dual-processor configuration
  • 487.2 Gbps video-on-demand throughput, using a dual-processor configuration

The whitepaper can be downloaded from the "Delivering up to 500 Gbps Throughput for Next-Gen CDNs" page in exchange for your personal information, although a quick search shows Intel has also put up a PDF (it is not certain the two sides are serving the same document): "Delivering up to 500 Gbps Throughput for Next-Gen CDNs".

The single-CPU server is described as being wired up with four 100Gbps interfaces and the dual-CPU server with eight (SUT here stands for system under test):

These client systems were connected to the CDN servers using 100 GbE links through a switch; 4x100 GbE connections for the single-processor SUT, and 8x100 GbE for the dual-processor SUT. Testing was done using Wrk, a widely recognized open-source HTTP(S) benchmarking tool.

But if you actually look at the diagram, the server has two 100Gbps links (single CPU) or four 100Gbps links (dual CPU), and wrk itself also takes up two or four of the 100Gbps links:

The end of the whitepaper also lists the test configurations; everything runs on Ubuntu 20.04, the single-CPU setup uses two Intel 100Gbps NICs, and the dual-CPU setup uses four Mellanox 100Gbps NICs:

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Single processor SUT configuration was based on the Supermicro SMC 110P-WTR-TNR single socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by two 100 GbE Intel® Ethernet Network Adapters E810. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 8 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Dual processor SUT configuration was based on the Supermicro SMC 22OU-TNR dual socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by four 100 GbE Mellanox MCX516A-CDAT adapters. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 12 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

Which immediately raises the question: how exactly do four 100Gbps NICs deliver 500Gbps of bandwidth...

This PR immediately brings to mind the slides Netflix published earlier (previously covered in the post "Netflix serving 400Gbps of video traffic from a single machine"), in which Netflix mentioned that on the Intel platform they were limited by memory bandwidth and the whole machine could only reach about 230Gbps.

Another guess: if the 500Gbps Intel and Varnish claim is the total traffic counted on the switch (does anyone count it that way? are you Juniper?...), then converting back you roughly halve it (and that is being generous by not even subtracting the cache-miss traffic going back to the origin server), which lands right around what Netflix got on FreeBSD...
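A back-of-the-envelope check of that guess, using only my own arithmetic and the numbers quoted above:

    # Hypothetical sanity check: if "500 Gbps" is the sum across all switch ports,
    # each byte the server sends is counted twice (server->switch and switch->client).
    nic_count = 4            # dual-CPU SUT: four 100 GbE NICs
    nic_capacity_gbps = 100
    claimed_gbps = 509.7     # live-linear number from the PR

    server_egress_limit = nic_count * nic_capacity_gbps   # 400 Gbps physical ceiling
    implied_server_egress = claimed_gbps / 2               # ~255 Gbps if counted at the switch

    print(f"physical NIC ceiling : {server_egress_limit} Gbps")
    print(f"implied server egress: {implied_server_egress:.1f} Gbps")
    # ~255 Gbps sits comfortably under the 400 Gbps ceiling, and close to the ~230 Gbps
    # memory-bandwidth-bound figure Netflix reported on their Intel platform.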

Waiting for a rebuttal XDDD

QOI, a Lossless Image Compression Algorithm

Saw "Lossless Image Compression in O(n) Time" on Hacker News Daily: the author put out a lossless image compression algorithm whose compression and decompression are extremely fast, while the compression ratio doesn't trail PNG by much. The Hacker News discussion is also worth a look: "QOI: Lossless Image Compression in O(n) Time (phoboslab.org)".

The post mentions stb_image.h, which is widely used in the game industry:

Yes, stb_image saved us all from the pains of dealing with libpng and is therefore used in countless games and apps. A while ago I aimed to do the same for video with pl_mpeg, with some success.

The author's bio also shows that his main line of work is games:

My name is Dominic Szablewski. I build games, experiment with JavaScript and occasionally tinker with low-level C.

Lossless image compression and decompression is something game developers reach for fairly often, so he wanted to see whether there was room for a better tool, and ended up building QOI out of four very simple operations (and found the results were great); a sketch of the idea follows the list below:

  • A run of the previous pixel
  • An index into a previously seen pixel
  • The difference to the previous pixel
  • Full rgba values
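A minimal, illustrative Python encoder built around those four cases; this is a simplification of my own rather than the actual QOI bitstream format (RGB tuples only, tagged Python tuples instead of packed bytes):

    def qoi_like_encode(pixels):
        """Encode a list of (r, g, b) pixels using the four QOI-style cases."""
        out = []
        prev = (0, 0, 0)
        index = [(0, 0, 0)] * 64          # small hash-indexed table of recently seen pixels
        run = 0

        def index_pos(px):
            r, g, b = px
            return (r * 3 + g * 5 + b * 7) % 64

        for px in pixels:
            if px == prev:                # 1. a run of the previous pixel
                run += 1
                continue
            if run:
                out.append(("RUN", run))
                run = 0
            pos = index_pos(px)
            if index[pos] == px:          # 2. an index into a previously seen pixel
                out.append(("INDEX", pos))
            else:
                dr, dg, db = (px[i] - prev[i] for i in range(3))
                if all(-2 <= d <= 1 for d in (dr, dg, db)):
                    out.append(("DIFF", dr, dg, db))   # 3. the difference to the previous pixel
                else:
                    out.append(("RGB", *px))           # 4. full values
            index[pos] = px
            prev = px

        if run:
            out.append(("RUN", run))
        return out

    print(qoi_like_encode([(10, 10, 10), (10, 10, 10), (11, 10, 9), (200, 3, 50)]))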

From the Hacker News discussion you can also see that this set of techniques shows up regularly inside modern compression algorithms, so even though the author describes himself as not a compression guy, the methods he picked are actually pretty solid...

Sticking to a single thread mainly avoids the complexity and overhead of threading. On the "QOI Benchmark Results" page you can see that regardless of file type, the compression and decompression speeds look very good, and the compression ratio is not far off libpng.

The author himself also notes that no SIMD acceleration is used yet, so presumably there is still quite a bit of headroom...

A Python 3.10 + JIT Option Forked from Microsoft's Pyjion Project

Spotted "Pyjion – A Python JIT Compiler (trypyjion.com)" on Hacker News; it is another project trying to speed up Python through a JIT:

Pyjion is a drop-in JIT Compiler for Python 3.10. It can be pip installed into a CPython 3.10 installation on Linux, Mac OS X, or Windows.

A quick look shows it is forked from Microsoft's Pyjion project; the last commit on the original project was a year ago, the repository has been marked archived (read-only mode), and it leaves a migration note pointing at the project mentioned above:

Development has moved to https://github.com/tonybaloney/Pyjion

You can see that most of the benchmarks have already moved into improvement territory (many projects that introduce a JIT start out slower in the early stages):

Compared with other JIT options, Pyjion aims for high compatibility with existing Python programs, including all kinds of extensions, which is indeed the pain point when using software like PyPy...

It looks like once it is installed via pip you can simply import it and turn it on, and it takes effect from then on:

import pyjion; pyjion.enable()
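A quick way to try it out (a sketch of mine; only pyjion.enable() comes from the project's own instructions, the timing part is plain standard library):

    import timeit

    import pyjion
    pyjion.enable()

    def fib(n: int) -> int:
        # A small CPU-bound function so the JIT has something to chew on.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(25))
    print(timeit.timeit("fib(25)", globals=globals(), number=100))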

One more thing: while digging through the Hacker News comments I ran into this one, which made me laugh; that is about as new as it gets XD

zatarc 3 days ago

Pyjion requires: CPython 3.10 and .NET 6

.NET 6 Release: 19 hours ago (https://github.com/dotnet/core/blob/main/release-notes/6.0/6...)

... ok.

YJIT Brings Big Performance Gains to Ruby

News from the Hacker News front page: YJIT, sponsored by Shopify, has been accepted upstream by Ruby: "Merge YJIT: an in-process JIT compiler (github.com/ruby)".

YJIT currently provides average speedups of 23% over the CRuby interpreter on realistic benchmarks, and near-instant warm-up time.

Maxime Chevalier-Boisvert, who implemented YJIT, wrote about the implementation on her own blog: "YJIT: Building a New JIT Compiler Inside CRuby"; the approach chosen comes from her PhD work: "Simple and Effective Type Check Removal through Lazy Basic Block Versioning".
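A toy illustration of the lazy basic block versioning idea (my own sketch, unrelated to the actual CRuby/YJIT code): each block is only "compiled" the first time it is reached with a given type context, and that version can then drop the type checks for the types it has already observed.

    # Toy "compiler": one specialized version per (block, type context).
    compiled_versions = {}

    def get_version(block_name, type_context, generate):
        key = (block_name, type_context)
        if key not in compiled_versions:
            # Lazily generate a version specialized for the observed types;
            # inside it, checks for those types can be omitted.
            compiled_versions[key] = generate(type_context)
            print(f"compiling {block_name} for {type_context}")
        return compiled_versions[key]

    def gen_add_block(type_context):
        if type_context == ("int", "int"):
            return lambda a, b: a + b          # no type checks needed on this path
        return lambda a, b: generic_add(a, b)  # fall back to a generic path

    def generic_add(a, b):
        return a + b

    def add(a, b):
        ctx = (type(a).__name__, type(b).__name__)
        return get_version("add", ctx, gen_add_block)(a, b)

    print(add(1, 2))        # compiles the ("int", "int") version on first use
    print(add(3, 4))        # reuses it
    print(add("a", "b"))    # compiles a separate version for strings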

You can see that back when the blog post was written in June the improvements were not this large yet, and the author mentioned there was still plenty of room to push further:

That being said, according to our benchmarks, we’ve been able to achieve speedups over the CRuby interpreter of 7% on railsbench, 19% on liquid template rendering, and 19% on activerecord.

Currently, only about 50% of instructions in railsbench are executed by YJIT, and the rest run in the interpreter, meaning that there is still a lot we can do to improve upon our current results.

The original MJIT looks like it will gradually fade out...

Improvements to the PyPy JIT

From the PyPy side, a significant JIT improvement: "Better JIT Support for Auto-Generated Python Code".

They reproduced the performance problem with Tornado, and the rest of the post uses that example for testing:

If you render a big HTML template (example) using the Tornado templating engine, the template rendering is really not any faster than CPython.

The workaround appears to be: when the trace limit is hit, mark the function, so that the next time it is traced the tracer enters a special mode and keeps going, avoiding wasting the tracing work already done:

After we have hit the trace limit and no inlining has happened so far, we mark the outermost function as a source of huge traces. The next time we trace such a function, we do so in a special mode. In that mode, hitting the trace limit behaves differently: Instead of stopping the tracer and throwing away the trace produced so far, we will use the unfinished trace to produce machine code.
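In rough Python, the control flow described above looks something like this (my own paraphrase of the quoted paragraph with stub functions, not PyPy's actual code, and omitting the "no inlining so far" condition):

    huge_trace_sources = set()
    TRACE_LIMIT = 8

    def interpret(function):
        # Stand-in for the tracing interpreter: just yields fake "operations".
        yield from range(function.op_count)

    def compile_trace(trace_ops):
        # Stand-in for producing machine code from a (possibly unfinished) trace.
        return f"machine code for {len(trace_ops)} ops"

    def trace(function):
        special_mode = function in huge_trace_sources
        trace_ops = []
        for op in interpret(function):
            trace_ops.append(op)
            if len(trace_ops) > TRACE_LIMIT:
                if special_mode:
                    # Special mode: use the unfinished trace instead of discarding it.
                    return compile_trace(trace_ops)
                # First time over the limit: remember the function, throw the trace away.
                huge_trace_sources.add(function)
                return None
        return compile_trace(trace_ops)

    class Fn:
        def __init__(self, op_count):
            self.op_count = op_count

    big = Fn(op_count=50)
    print(trace(big))   # None: marked as a source of huge traces
    print(trace(big))   # second attempt compiles the unfinished trace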

The performance improvement is substantial:

It looks like this idea is planned to go into the upcoming 3.8 release (i.e. the PyPy release targeting Python 3.8):

The work described in this post tiny bit experimental still, but we will release it as part of the upcoming 3.8 beta release, to get some more experience with it. Please grab a 3.8 release candidate, try it out and let us know your observations, good and bad!

Django's template engine is not particularly fast; switching to Jinja2 is one option, but if an existing project is hitting template-engine performance problems, it may also be worth checking how well PyPy handles it now...

Mouse Wheel Scroll Speed on Ubuntu

Lately I have been switching back to Windows a lot to play D2R, and noticed the scroll wheel is much faster there; back on Ubuntu 20.04 there is no built-in way to adjust the mouse wheel speed, so I tried a few solutions and found that, surprisingly, there are still plenty of landmines XD

Searching turns up "Increase mouse wheel scroll speed" and "How to change my mouse wheel scroll rate?"; the most upvoted answer in both is imwheel, but the latest release of that software is from 2004, and once you actually use it you find it has lots of bugs on modern systems...

The solution I ended up using is "Mouse scroll wheel acceleration, implemented in user space", where the author controls the acceleration in Python; after testing it, things feel much more normal. In the given example ./main.py -v --exp 1, the --exp 1 turned out a bit too fast in practice, so I changed it to 0.75, which suits me better.
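Conceptually the exponent controls how aggressively fast scrolling gets amplified; a rough illustration of that kind of curve (my own guess at the shape, not the project's actual formula):

    def scroll_multiplier(events_per_second: float, exp: float) -> float:
        # Faster wheel movement -> larger multiplier; exp controls how steeply it grows.
        return max(1.0, events_per_second ** exp)

    for speed in (1, 5, 10, 20):
        print(speed, scroll_multiplier(speed, exp=1.0), scroll_multiplier(speed, exp=0.75))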

Following the author's instructions, install the dependencies first, then hook it into Session and Startup so it runs after login:

Using Colors to Distinguish Brackets in Code

Saw on Hacker News Daily that VSCode improved the efficiency of bracket pair colorization, which reminded me that my Vim apparently doesn't have this feature installed: "Bracket pair colorization 10,000x faster".

On the VSCode side the main change is new data structures that cut down the amount of computation; worth a read if you are interested:

Efficient bracket pair colorization was a fun challenge. With the new data structures, we can also solve other problems related to bracket pairs more efficiently, such as general bracket matching or showing colored line scopes.

As for me, I went looking for a Vim plugin and found "Rainbow Parentheses Improved"; after installing it and looking at some PHP code it seems fine, so I no longer have to sit there counting which left bracket matches which right bracket...
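The basic idea behind these plugins is simply to color by nesting depth, which a single stack-based pass gives you; a minimal sketch of mine (not how VSCode or the Vim plugin is actually implemented):

    PAIRS = {")": "(", "]": "[", "}": "{"}
    COLORS = ["red", "green", "blue"]          # cycled by nesting depth

    def color_brackets(text: str):
        """Return (position, char, color) for every bracket, colored by depth."""
        stack, result = [], []
        for i, ch in enumerate(text):
            if ch in "([{":
                result.append((i, ch, COLORS[len(stack) % len(COLORS)]))
                stack.append(ch)
            elif ch in PAIRS:
                if stack and stack[-1] == PAIRS[ch]:
                    stack.pop()
                result.append((i, ch, COLORS[len(stack) % len(COLORS)]))
        return result

    for pos, ch, color in color_brackets("f(a[b], {c: (d)})"):
        print(pos, ch, color)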

Using Ephemeral Storage to Speed Up MySQL over ZFS

Percona's "MySQL/ZFS in the Cloud, Leveraging Ephemeral Storage" explores how ZFS performs on ephemeral storage (the local disks that come attached to the machine).

The first test uses the local disks directly as the primary storage; with ZFS, local storage is still quite a bit faster than Premium SSD (an Azure product line):

But keeping the data on local storage is a bit nerve-wracking, and at least in production you would probably not do that, so the follow-up test uses it as L2ARC instead; the improvement is quite visible, even approaching the case where the data sits directly on local storage:

They also tested ext4/bcache, which does not look as good:

Overall this looks like a decent option...