Let's Encrypt Issues New Root and Intermediate Certificates

Let's Encrypt has issued a new Root Certificate and new Intermediate Certificates: "Let's Encrypt's New Root and Intermediate Certificates".

Partly this is because the existing Intermediate Certificates are about to expire, and partly it is to use ECDSA to cut bandwidth costs on the wire:

On Thursday, September 3rd, 2020, Let’s Encrypt issued six new certificates: one root, four intermediates, and one cross-sign. These new certificates are part of our larger plan to improve privacy on the web, by making ECDSA end-entity certificates widely available, and by making certificates smaller.

Originally there were four Intermediate Certificates, Let's Encrypt Authority {X1, X2, X3, X4}, all with RSA 2048-bit keys.

X1 and X2 have more or less expired already (the cross-signed versions are past their expiry, and the ones signed by their own ISRG Root X1 have less than a month left), but those two are no longer in use, so they are left out of this round.

X3 and X4 expire next year, so new Intermediate Certificates, named R3 and R4, will be issued; as before, they are signed by their own ISRG Root X1 and cross-signed by IdenTrust DST Root CA X3:

For starters, we’ve issued two new 2048-bit RSA intermediates which we’re calling R3 and R4. These are both issued by ISRG Root X1, and have 5-year lifetimes. They will also be cross-signed by IdenTrust. They’re basically direct replacements for our current X3 and X4, which are expiring in a year. We expect to switch our primary issuance pipeline to use R3 later this year, which won’t have any real effect on issuance or renewal.

Then comes the highlight of this round: a new Root Certificate called ISRG Root X2, plus two Intermediate Certificates called E1 and E2:

The other new certificates are more interesting. First up, we have the new ISRG Root X2, which has an ECDSA P-384 key instead of RSA, and is valid until 2040. Issued from that, we have two new intermediates, E1 and E2, which are both also ECDSA and are valid for 5 years.

The main goal is to reduce bandwidth during TLS connections; this design is expected to save nearly 400 bytes per certificate:

While a 2048-bit RSA public key is about 256 bytes long, an ECDSA P-384 public key is only about 48 bytes. Similarly, the RSA signature will be another 256 bytes, while the ECDSA signature will only be 96 bytes. Factoring in some additional overhead, that’s a savings of nearly 400 bytes per certificate. Multiply that by how many certificates are in your chain, and how many connections you get in a day, and the bandwidth savings add up fast.
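
A quick back-of-the-envelope check of those numbers (my own arithmetic, not from the announcement; the exact "additional overhead" is unspecified):

# Per-certificate size difference, using the figures quoted above.
RSA_KEY, RSA_SIG = 256, 256      # 2048-bit RSA public key and signature
ECDSA_KEY, ECDSA_SIG = 48, 96    # ECDSA P-384 public key and signature

saved = (RSA_KEY - ECDSA_KEY) + (RSA_SIG - ECDSA_SIG)
print(saved)  # 368 bytes; the unspecified extra overhead brings it to "nearly 400"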

Another notable change is shortening the names (dropping "Let's Encrypt Authority"), also to save transfer costs:

As an aside: since we’re concerned about certificate sizes, we’ve also taken a few other measures to save bytes in our new certificates. We’ve shortened their Subject Common Names from “Let’s Encrypt Authority X3” to just “R3”, relying on the previously-redundant Organization Name field to supply the words “Let’s Encrypt”. We’ve shortened their Authority Information Access Issuer and CRL Distribution Point URLs, and we’ve dropped their CPS and OCSP urls entirely. All of this adds up to another approximately 120 bytes of savings without making any substantive change to the useful information in the certificate.

This part reminds me of my earlier post "Saving Bandwidth: The Ultimate Version...", which mentioned that AWS's own SSL certificate was so bloated that switching to DigiCert's actually saved a good chunk of money XDDD

The post also mentions that the cross-sign this time is on the ECDSA Root Certificate (ISRG Root X2) rather than on the ECDSA Intermediate Certificates (E1 and E2), mainly because they did not want to go through another transition period:

In the end, we decided that providing the option of all-ECDSA chains was more important, and so opted to go with the first option, and cross-sign the ISRG Root X2 itself.

This is a fairly important step forward; it looks like the various clients should be shipping new releases with support soon.

The Tuning Process of a MyRocks/MariaDB Migration

This looks like a case where Percona was brought in to help migrate to MyRocks, then written up as a success story: "The Road Story of a MyRocks/MariaDB Migration".

The databases run on dedicated machines rather than cloud VMs, so scaling up isn't just a matter of picking a bigger instance (maybe the memory slots are already full, that sort of thing...):

Replicas run on bare metal servers, usually Dual Xeon E5 v3 or v4, with 192 GB to 384 GB of RAM.

The main problem this time was that performance couldn't keep up. Something the article doesn't spell out, but which can be inferred, is that they had no plans to change the architecture; instead they wanted to solve the problem by improving database performance:

The servers were close to their limits and were slow to catch up with replication after a maintenance period

The rest of the article covers the process. The key step was rebuilding MariaDB so that MyRocks supports Zstandard (MyRocks itself supports Zstandard, but for some reason it is disabled in MariaDB's bundled MyRocks...), which dramatically reduced disk usage.
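
A rough illustration of the Zstandard point (a minimal sketch, assuming the third-party zstandard package is installed; the actual savings in MyRocks depend entirely on the data, and zstd's bigger advantage is usually CPU cost at a similar ratio):

import zlib
import zstandard  # pip install zstandard

# Repetitive, database-ish sample data.
data = b"id,user,status\n" + b"".join(
    b"%d,user%d,active\n" % (i, i % 100) for i in range(100_000)
)

print("raw: ", len(data))
print("zlib:", len(zlib.compress(data, 6)))
print("zstd:", len(zstandard.ZstdCompressor(level=6).compress(data)))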

They also ran into OOM issues, which went away after switching to jemalloc to rein in memory usage (this is pretty much standard practice with InnoDB as well).

That said, the "Increased Read Load Over Time" section still shows a workaround:

The read load was still rising a bit but at a much smaller pace. Instead of hours, it was days. That’s kind of expected given the workload and we were already planning for periodic manual compactions.

For now, MyRocks's strength is mainly resource savings, but the downside is that there are quite a few sharp edges to handle carefully. So in general I would still start with InnoDB, and only consider switching once things get really big...

esbuild, a JavaScript Minifier

esbuild is a JavaScript bundler & minifier; the GitHub tagline highlights that the focus is speed:

An extremely fast JavaScript bundler and minifier

The bundling-time benchmarks show the advantage clearly.

The final output sizes also show that it isn't far off from the smallest combination, rollup + terser.

Running it on jQuery as a quick test, the minification results look decent:

-rw-r--r-- 1 gslin staff  89228 Feb 19 06:03 jquery-3.4.1-esbuild.min.js
-rw-r--r-- 1 gslin staff 280364 May  2  2019 jquery-3.4.1.js
-rw-r--r-- 1 gslin staff  88145 May  2  2019 jquery-3.4.1.min.js
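
For reference, the test can be reproduced with something like this (a sketch only; the post doesn't show the exact invocation, and this assumes esbuild is on $PATH):

import os
import subprocess

# Minify jQuery with esbuild, then compare the three file sizes.
subprocess.run(
    ["esbuild", "jquery-3.4.1.js", "--minify",
     "--outfile=jquery-3.4.1-esbuild.min.js"],
    check=True,
)
for name in ("jquery-3.4.1.js", "jquery-3.4.1.min.js",
             "jquery-3.4.1-esbuild.min.js"):
    print(name, os.path.getsize(name))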

The speed comes mainly from Go and from parallelizing the work:

  • It's written in Go, a language that compiles to native code
  • Parsing, printing, and source map generation are all fully parallelized
  • Everything is done in very few passes without expensive data transformations
  • Code is written with speed in mind, and tries to avoid unnecessary allocations

The author does note that the project is still new and hasn't been tempered by time yet, so there may be bugs:

This is a hobby project that I wrote over the 2019-2020 winter break. I believe that it's relatively complete and functional. However, it's brand new code and probably has a lot of bugs. It also hasn't yet been used in production by anyone. Use at your own risk.

Might be worth letting it sit for a while and letting the pioneers step on, and fix, the bigger bugs first...

Saving Bandwidth: The Ultimate Version...

I saw "Three ways to reduce the costs of your HTTP(S) API on AWS", which covers ways to save on bandwidth costs on AWS; I couldn't help laughing the whole way through XD

The first is trimming unused headers from the HTTP response: with five billion HTTP requests per day, every byte shaved off saves USD$0.25/day:

Since we would send this five billion times per day, every byte we could shave off would save five gigabytes of outgoing data, for a saving of 25 cents per day per byte removed.

After tweaking some settings they saved USD$1,500/month:

Sending 109 bytes instead of 333 means saving $56 per day, or a bit over $1,500 per month.
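
The arithmetic checks out (my own calculation; the egress price of roughly USD$0.05/GB is implied by the quotes, not stated by me from AWS's price list):

# Daily egress saving from trimming response bytes, per the quoted numbers.
requests_per_day = 5_000_000_000
egress_usd_per_gb = 0.05  # implied by "5 GB/day saves 25 cents"

def daily_saving_usd(bytes_removed: int) -> float:
    gb_per_day = requests_per_day * bytes_removed / 1_000_000_000
    return gb_per_day * egress_usd_per_gb

print(daily_saving_usd(1))          # 0.25 USD/day per byte removed
print(daily_saving_usd(333 - 109))  # 56.0 USD/day, ~1,680 USD/month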

The second is to attack the TLS layer. One initial direction was to use TLS session resumption to lower the cost of repeat connections, but they found there were no knobs to turn:

One thing that reduces handshake transfer size is TLS session resumption. Basically, when a client connects to the service for the second time, it can ask the server to resume the previous TLS session instead of starting a new one, meaning that it doesn’t have to send the certificate again. By looking at access logs, we found that 11% of requests were using a reused TLS session. However, we have a very diverse set of clients that we don’t have much control over, and we also couldn’t find any settings for the AWS Application Load Balancer for session cache size or similar, so there isn’t really anything we can do to affect this.

So instead they raised the idle timeout (to avoid reconnecting):

That leaves reducing the number of handshakes required by reducing the number of connections that the clients need to establish. The default setting for AWS load balancers is to close idle connections after 60 seconds, but it seems to be beneficial to raise this to 10 minutes. This reduced data transfer costs by an additional 8%.

Next, the SSL certificate issued by AWS itself was too fat, so they switched to one issued by DigiCert, dramatically shrinking the certificate and saving USD$200/day:

So given that the clients establish approximately two billion connections per day, we’d expect to save four terabytes of outgoing data every day. The actual savings were closer to three terabytes, but this still reduced data transfer costs for a typical day by almost $200.
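
The implied per-connection saving works out to about 2 KB of certificate data (again my own arithmetic, reverse-engineered from the quote):

# Expected daily egress reduction from the smaller certificate chain.
connections_per_day = 2_000_000_000
cert_bytes_saved = 2_000  # assumption that makes the quoted 4 TB/day work out

tb_per_day = connections_per_day * cert_bytes_saved / 1_000_000_000_000
print(tb_per_day)  # 4.0 TB/day expected; the measured saving was closer to 3 TB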

These tricks are genuinely entertaining XDDD

Then again, all of these are about squeezing down the amount of data transferred to the client; compared with the cost savings, the bigger payoff is arguably the improved network response time...

Hacker News

This morning I saw "Tell HN: Thank you for not redesigning Hacker News". The author is in a region with limited connectivity where pretty much every site is unusable, but Hacker News never switched to a pile of front-end frameworks and stuck with plain HTML, which keeps the pages tiny:

I’m currently in a country with low speed internet and the entire ‘modern’ web is basically unusable except HN, which still loads instantly. Reddit, Twitter, news and banking sites are all painfully slow or simply time out altogether.

To PG, the mods and whoever else is responsible: thank you for not trying to ‘fix’ what isn’t broken.

Opening the browser's network tools for a quick look, it turns out the single biggest asset is, of all things, the favicon XDDD

According to the compatibility table on Wikipedia, if you only need to support IE11+ you can switch to PNG, which already improves the size noticeably:

-rw-r--r-- 1 gslin staff 7527 Sep  2 09:50 favicon.ico
-rw-r--r-- 1 gslin staff 2598 Sep  2 09:51 favicon.png
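
The conversion itself is trivial; a minimal sketch with Pillow (my choice of tool, not something from the thread):

import os
from PIL import Image  # pip install Pillow

# Convert the .ico to a PNG and compare the file sizes.
Image.open("favicon.ico").save("favicon.png")
print(os.path.getsize("favicon.ico"), os.path.getsize("favicon.png"))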

RFC8482 Deprecates the DNS ANY Query...

I saw Cloudflare's "RFC8482 - Saying goodbye to ANY", which notes that RFC8482 deprecates the ANY query: "Providing Minimal-Sized Responses to DNS Queries That Have QTYPE=ANY".

The Domain Name System (DNS) specifies a query type (QTYPE) "ANY". The operator of an authoritative DNS server might choose not to respond to such queries for reasons of local policy, motivated by security, performance, or other reasons.

Cloudflare's pain point was mainly operational: ANY responses come back as very large UDP packets, which makes them easy fuel for amplification attacks.
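
The new behavior is easy to observe; a minimal sketch with dnspython (my choice of library) querying a Cloudflare-hosted zone, which answers ANY with a minimal response instead of dumping everything:

import dns.resolver  # pip install dnspython

# RFC 8482-compliant servers answer QTYPE=ANY with a minimal response,
# typically a single HINFO record, rather than every record in the zone.
answer = dns.resolver.resolve("cloudflare.com", "ANY")
for rr in answer:
    print(rr)  # e.g. "RFC8482" ""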

With refusal of ANY queries now standardized, DNS providers have one more weapon at their disposal.

innodb_dedicated_server in MySQL 8.0

Percona wrote about the upcoming innodb_dedicated_server variable in MySQL 8.0: "New MySQL 8.0 innodb_dedicated_server Variable Optimizes InnoDB from the Get-Go"; Oracle's official documentation is at "15.6.13 Enabling Automatic Configuration for a Dedicated MySQL Server".

This variable is designed for machines dedicated entirely to MySQL. In that situation, reasonably usable system parameters can be derived from the amount of RAM plus a few simple formulas...

Per the documentation, the system derives the two parameters innodb_buffer_pool_size and innodb_log_file_size automatically from the amount of RAM, and sets innodb_flush_method to O_DIRECT_NO_FSYNC (if the platform supports that value).
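
The buffer pool part of the formula is simple enough to sketch (based on my reading of the MySQL documentation; the log file sizing rules are more involved and omitted here):

def dedicated_buffer_pool_gb(ram_gb: float) -> float:
    """Approximate innodb_buffer_pool_size chosen by innodb_dedicated_server."""
    if ram_gb < 1:
        return 0.128        # falls back to the 128 MB default
    if ram_gb <= 4:
        return ram_gb * 0.5
    return ram_gb * 0.75

print(dedicated_buffer_pool_gb(192))  # 144.0 GB on a 192 GB machine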

Looking at the formulas, though, experience says you can be more aggressive... as the Percona article notes, with enough memory you might start tuning innodb_buffer_pool_size from 80%:

For InnoDB buffer pool size (based on this article), consider allocating 80% of physical RAM for starters. You can increase it to as large as needed and possible, as long as the system doesn’t swap on the production workload.

innodb_log_file_size, on the other hand, should be sized from the write pattern rather than straight from RAM. Some machines are huge but see almost no writes, so the formula's value ends up far too large:

For InnoDB log file size, it should be able to handle one hour of writes to allow InnoDB to optimize writing the redo log to disk. You can calculate an estimate by following the steps here, which samples one minute worth of writes to the redo log. You could also get a better estimate from hourly log file usage with Percona Monitoring and Management (PMM) graphs.

Still, the auto-tuned values are basically serviceable, which is quite helpful for people just getting started.

The Size of the Ubuntu 18.04 LTS Minimal Image

I saw "RFC: Ubuntu 18.04 LTS Minimal Images", which is collecting feedback on the upcoming Ubuntu 18.04 LTS Minimal Image...

The Ubuntu Minimal Image is the smallest base upon which a user can apt install any package in the Ubuntu archive.

Things may still change, but judging from the current builds, both the compressed and uncompressed images are considerably smaller than 16.04's.

For people who need these images (for example, as a base image for Docker), a smaller image is simply nicer to work with...

The Idea of Working Set Size (WSS)

Brendan Gregg at Netflix (best known for inventing the Flame Graph) wrote "How To Measure the Working Set Size on Linux", where he measures how much memory an application actually touches per unit of time:

The Working Set Size (WSS) is how much memory an application needs to keep working. Your app may have populated 100 Gbytes of main memory, but only uses 50 Mbytes each second to do its job. That's the working set size. It is used for capacity planning and scalability analysis.
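
The trick behind his measurement is the kernel's "referenced" page flags: clear them, wait an interval, then count how many pages were touched again. A minimal sketch of that idea (needs root; as far as I know this is the same /proc mechanism his wss.pl tool uses):

import sys
import time

pid, interval = sys.argv[1], float(sys.argv[2])

# Clear the referenced bits on all of the process's pages.
with open(f"/proc/{pid}/clear_refs", "w") as f:
    f.write("1")

time.sleep(interval)

# Sum the pages that were referenced again during the interval.
referenced_kb = sum(
    int(line.split()[1])
    for line in open(f"/proc/{pid}/smaps")
    if line.startswith("Referenced:")
)
print(f"WSS over {interval}s: {referenced_kb} kB")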

This can be used to analyze whether an application could exploit the L1/L2/L3 caches for a large speedup, and the results can then be plotted as charts like the ones in the post.

At a company the size of Netflix, you need to design useful metrics and build the corresponding tools, so that other people can quickly grasp what's going on; after all, not everyone has superhuman range, and with good tools you at least have a lead when troubleshooting...

RNN-Based Lossless Compression

On Hacker News I saw the paper "DeepZip: Lossless Compression using Recurrent Networks", which uses RNNs to compress data further; the code is public on GitHub at kedartatwawadi/NN_compression for anyone to try.

One particularly interesting result: data generated by a lagged Fibonacci PRNG compresses very well here, whereas traditional compressors get essentially no compression on it...
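
To see why that is surprising: a lagged Fibonacci generator is just a linear recurrence, so a model can in principle learn to predict it perfectly, yet its output looks random to a byte-level compressor. A quick sketch (the parameters are illustrative, not the paper's):

import zlib

# Additive lagged Fibonacci generator: s[n] = (s[n-24] + s[n-55]) mod 256.
j, k, m = 24, 55, 256
state = list(range(1, k + 1))  # any fixed seed of length k will do
out = bytearray()
for _ in range(100_000):
    v = (state[-j] + state[-k]) % m
    state.append(v)
    out.append(v)

print(len(out), len(zlib.compress(bytes(out), 9)))
# zlib barely shrinks it, even though the sequence is fully deterministic.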

The overall compression ratios are decent, but the only baseline is gzip; there's no comparison against more advanced compressors (like xz). Judging from the numbers, it probably wouldn't win by much in the general case, but the PRNG result alone gives everyone a new direction to play with...