Cloudflare Rolls Out a Compression Mechanism for HTTPS

Under TLS (HTTPS) you basically cannot enable compression, mainly to keep secret tokens from being extracted through the predictability of the compression dictionary, as shown by attacks like CRIME, BREACH, TIME and HEIST (the list never ends...). But turning compression off across the board hurts performance quite a bit.

Cloudflare set out to find a way to keep compression without leaking secret tokens, which resulted in this post: 「A Solution to Compression Oracles on the Web」.

The key part is the beginning of the Our Solution section:

We decided to use selective compression, compressing only non-secret parts of a page, in order to stop the extraction of secret information from a page.

They use regexes to decide which parts are secret tokens, exclude those parts from compression, and keep compressing everything else. The amount of data transferred still drops a lot, but the secret tokens are not leaked. Because the idea is quite unusual and has not been proven in practice, they also set up a challenge site for people to attack:
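
To make the idea concrete, here is a minimal sketch of selective compression, not Cloudflare's actual implementation: a hypothetical regex marks CSRF-token-like values, and only the segments outside those matches go through zlib, while the secrets are passed along uncompressed.

```python
import re
import zlib

# Hypothetical pattern for CSRF-token-like secrets; a real deployment would
# configure these rules per site rather than guess with one regex.
SECRET_RE = re.compile(rb'name="csrf_token" value="([A-Za-z0-9+/=]{16,})"')

def selective_compress(body: bytes):
    """Split the response around secret matches: compress the non-secret
    segments with zlib, pass the secret segments through verbatim.
    The tagged-chunk framing is only for illustration."""
    chunks, pos = [], 0
    for m in SECRET_RE.finditer(body):
        if m.start(1) > pos:
            chunks.append(("z", zlib.compress(body[pos:m.start(1)])))
        chunks.append(("raw", body[m.start(1):m.end(1)]))  # secret: never compressed
        pos = m.end(1)
    if pos < len(body):
        chunks.append(("z", zlib.compress(body[pos:])))
    return chunks

def selective_decompress(chunks) -> bytes:
    return b"".join(zlib.decompress(c) if tag == "z" else c for tag, c in chunks)

page = b'<form><input name="csrf_token" value="Zm9vYmFyYmF6cXV4MTIzNDU2"></form>' * 3
assert selective_decompress(selective_compress(page)) == page
```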

We have set up the challenge website compression.website with protection, and a clone of the site compression.website/unsafe without it. The page is a simple form with a per-client CSRF designed to emulate common CSRF protection. Using the example attack presented with the library we have shown that we are able to extract the CSRF from the size of request responses in the unprotected variant but we have not been able to extract it on the protected site. We welcome attempts to extract the CSRF without access to the unencrypted response.

If this direction turns out to be viable, someone will probably develop standards so that compression algorithms do not have to guess which parts are secret tokens, which would better guard against leaks caused by tokens the regexes miss...

Netflix's Work on Time Series Data Storage...

The post 「Scaling Time Series Data Storage — Part I」 describes the work Netflix has put into time series data storage...

Because the workload is write-heavy and read-light, they chose Cassandra. After hitting bottlenecks, they split the frequently written data from the data that rarely changes and optimized each part differently, pulling in both caching and compression...
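
My loose reading of that split, as a sketch rather than Netflix's actual schema: recent viewing records stay in a small, write-hot store, and once a member's history grows past a threshold the older part is rolled up into a compressed blob in a separate, rarely-changing store.

```python
import json
import zlib

LIVE_LIMIT = 5         # illustrative threshold; the real roll-up rules differ
live_store = {}        # member_id -> recent viewing records (hot, write-heavy)
compressed_store = {}  # member_id -> zlib blob of older records (cold, rarely changes)

def _load_cold(member_id):
    blob = compressed_store.get(member_id)
    return json.loads(zlib.decompress(blob)) if blob else []

def record_view(member_id, event):
    history = live_store.setdefault(member_id, [])
    history.append(event)
    if len(history) > LIVE_LIMIT:
        # Roll the overflow into the compressed, rarely-changing store.
        cold = _load_cold(member_id) + history[:-LIVE_LIMIT]
        compressed_store[member_id] = zlib.compress(json.dumps(cold).encode())
        live_store[member_id] = history[-LIVE_LIMIT:]

def full_history(member_id):
    return _load_cold(member_id) + live_store.get(member_id, [])

for i in range(12):
    record_view("member-1", {"title_id": i, "ts": 1000 + i})
print(len(full_history("member-1")))  # 12 records: 7 compressed + 5 live
```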

I wonder whether they have evaluated ScyllaDB internally...

RNN-based Lossless Compression

Saw the paper 「DeepZip: Lossless Compression using Recurrent Networks」 on Hacker News; it uses RNNs to help compression squeeze data down further, and the code is published on GitHub at kedartatwawadi/NN_compression for anyone to test.

One particularly interesting result is that data produced by a Lagged Fibonacci PRNG gets a very good compression ratio, whereas traditional compression methods should get almost nothing out of it...
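
For reference, the baseline behaviour is easy to see: bytes from an additive lagged Fibonacci generator (the lags below are just a common choice, not the paper's setup) look essentially incompressible to zlib.

```python
import zlib

def lagged_fibonacci_bytes(n, j=24, k=55, seed=12345):
    """Additive lagged Fibonacci PRNG: s[i] = (s[i-j] + s[i-k]) mod 256."""
    state = [(seed * (i + 1)) % 256 for i in range(k)]
    out = bytearray()
    for _ in range(n):
        v = (state[-j] + state[-k]) % 256
        state.append(v)
        state.pop(0)
        out.append(v)
    return bytes(out)

data = lagged_fibonacci_bytes(1_000_000)
print(len(zlib.compress(data, 9)) / len(data))  # ~1.0: zlib finds nothing to exploit
```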

The overall compression ratios are decent, but the only baseline is gzip; there is no comparison against more advanced compressors (such as xz). Judging from the numbers, it probably would not win by much in ordinary cases, but the PRNG result alone gives everyone a different direction to play with...

Amazon API Gateway Now Supports Compression...

Amazon API Gateway now supports compression: 「Amazon API Gateway Supports Content Encoding for API Responses」.

You can now enable content encoding support for API Responses in Amazon API Gateway. Content encoding allows API clients to request content to be compressed before being sent back in the response to an API request. This reduces the amount of data that is sent from API Gateway to API clients and decreases the time it takes to transfer the data. You can enable content encoding in the API definition. You can also set the minimum response size that triggers compression. By default, APIs do not have content encoding support enabled.

Once enabled, response data is compressed automatically, and you can also set the response size that triggers compression... According to the documentation (Content Codings Supported by API Gateway), the supported formats include the most common ones, gzip and deflate.
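
If you manage the API with boto3, the knob involved is the REST API's minimum compression size; something along these lines should do it (the API id and threshold are placeholders, and the change only applies to a stage after a redeploy):

```python
import boto3

apigw = boto3.client("apigateway")

# Enabling content encoding = giving the API a minimum compression size;
# responses smaller than this threshold are returned uncompressed.
apigw.update_rest_api(
    restApiId="abc123def4",  # placeholder REST API id
    patchOperations=[
        {"op": "replace", "path": "/minimumCompressionSize", "value": "1024"},
    ],
)

# Redeploy so the setting takes effect on the stage.
apigw.create_deployment(restApiId="abc123def4", stageName="prod")
```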

This feature seems to have been a standing feature request ever since API Gateway first appeared...

curl Will Support Brotli Compression

Saw someone on Twitter mention that curl now supports Brotli: 「HTTP: implement Brotli content encoding」.

Brotli helps most with text-type data (such as HTML):

Unlike most general purpose compression algorithms, Brotli uses a pre-defined 120 kilobyte dictionary, in addition to the dynamically populated ("sliding window") dictionary. The pre-defined dictionary contains over 13000 common words, phrases and other substrings derived from a large corpus of text and HTML documents. Using a pre-defined dictionary has been shown to increase compression where a file mostly contains commonly-used words.
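
Assuming the `brotli` Python bindings are available, the dictionary effect is easy to see on a small HTML-heavy payload:

```python
import zlib
import brotli  # pip install brotli; assumed available for this demo

html = (b"<!DOCTYPE html><html><head><title>Example</title></head>"
        b'<body><div class="content"><p>Hello, world!</p></div></body></html>')

gz = zlib.compress(html, 9)
br = brotli.compress(html, quality=11)
print(len(html), len(gz), len(br))  # Brotli's built-in dictionary usually wins on small HTML
```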

It is currently only in master; the upcoming release versions should include it...

Amazon Redshift's Compression Ratio Improvements

Amazon Redshift's improvements to compression ratios: 「Data Compression Improvements in Amazon Redshift Bring Compression Ratios Up to 4x」.

First, they introduced Zstandard:

First, we added support for the Zstandard compression algorithm, which offers a good balance between a high compression ratio and speed in build 1.0.1172. When applied to raw data in the standard TPC-DS, 3 TB benchmark, Zstandard achieves 65% reduction in disk space. Zstandard is broadly applicable.

Next is automatic compression selection, which is an immediate improvement for anyone who never configured compression settings:

Second, we’ve improved the automation of compression on tables created by the CREATE TABLE AS, CREATE TABLE or ALTER TABLE ADD COLUMN commands. Starting with Build 1.0.1161, Amazon Redshift automatically chooses a default compression for the columns created by those commands. Automated compression happens when we estimate that we can reduce disk space without degrading query performance. Our customers have seen up to 40% reduction in disk space.

Then there are improvements to the internal data structures:

Third, we’ve been optimizing our internal on-disk data structures. Our preview customers averaged a 7% reduction in disk space usage with this improvement. This feature is delivered starting with Build 1.0.1271.

Finally, better analysis for estimating the savings:

Finally, we have enhanced the ANALYZE COMPRESSION command to estimate disk space reduction.
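
In SQL terms the pieces fit together roughly like this, wrapped in psycopg2 since Redshift speaks the PostgreSQL protocol; the cluster endpoint, credentials and table names are all placeholders:

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="awsuser", password="placeholder",
)
conn.autocommit = True
cur = conn.cursor()

# Explicitly pick Zstandard for a column; tables created without explicit
# encodings now get an automatically chosen default instead.
cur.execute("""
    CREATE TABLE events (
        event_id BIGINT,
        payload  VARCHAR(65535) ENCODE ZSTD
    )
""")

# Ask Redshift to estimate what re-encoding an existing, populated table
# would save; the output suggests an encoding per column.
cur.execute("ANALYZE COMPRESSION clickstream")  # 'clickstream' is a placeholder table
for row in cur.fetchall():
    print(row)
```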

That said, several of the other product lines offer a more mature way of working (products like Amazon Athena); I wonder whether that will slowly push Amazon Redshift out of the front line...

eBay's Use of MongoDB as a Cache Layer...

In 「How eBay’s Shopping Cart used compression techniques to solve network I/O bottlenecks」, eBay describes how they solved problems they hit with MongoDB, but what I am looking at is how they use MongoDB rather than the specific problem they solved this time:

It’s easier to think of the MongoDB layer as a “cache” and the Oracle store as the persistent copy. If there’s a cache miss (that is, missing data in MongoDB), the services fall back to recover the data from Oracle and make further downstream calls to recompute the cart.

Using MongoDB as a cache layer, and on a cache miss going back to the underlying Oracle store to fetch the data and recompute, is a rather interesting pattern...
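
In code the pattern is the familiar cache-aside shape; the sketch below is mine, not eBay's (their services are not Python, and the collection name and helpers are made up):

```python
from pymongo import MongoClient

carts = MongoClient("mongodb://localhost:27017")["shopping"]["carts"]  # placeholder

def fetch_cart_items_from_oracle(user_id):
    # Stub standing in for the Oracle query plus downstream service calls.
    return [{"sku": "demo-item", "qty": 1}]

def recompute_cart(items):
    # Stub standing in for the cart recomputation step.
    return {"items": items, "total_qty": sum(i["qty"] for i in items)}

def load_cart(user_id):
    doc = carts.find_one({"_id": user_id})          # 1. try the MongoDB "cache"
    if doc is not None:
        return doc["cart"]
    items = fetch_cart_items_from_oracle(user_id)   # 2. miss: fall back to Oracle
    cart = recompute_cart(items)
    carts.replace_one({"_id": user_id},             # 3. write back for the next read
                      {"_id": user_id, "cart": cart}, upsert=True)
    return cart
```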

I am not sure why they did not just use memcached. Did they want a cache layer with an HA story? Or do they need to inspect and operate on the JSON documents directly?

Pinterest's Improvements to InnoDB Compression

Three months ago Pinterest wrote about improving InnoDB compression through dictionaries: 「Pinterest's Work on InnoDB Compression」.

In 「Evolving MySQL Compression - Part 2」 they go on to explain how to generate dictionary contents that work well for Pinterest's data, and the author has put the tooling (written in Python) on GitHub for others to use: 「pinterest/mysql_utils/zdict_gen/」.

You can see the compression ratio improves quite a bit more; this is taking database compression from an A to an A+...
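
I have not dug into their zdict_gen code, but the general idea can be sketched as: rank the field names and short values that show up across sample blobs, then pack the most common ones at the end of the dictionary, where zlib prefers them. A deliberately naive version:

```python
import json
from collections import Counter

def build_naive_zdict(sample_blobs, max_size=1024):
    """Rank JSON keys and short string values by frequency and place the most
    common ones closest to the END of the dictionary (where zlib's format
    gives them the cheapest back-references)."""
    counter = Counter()
    for blob in sample_blobs:
        for key, value in json.loads(blob).items():
            counter[f'"{key}":'] += 1
            if isinstance(value, str) and len(value) <= 32:
                counter[f'"{value}"'] += 1
    zdict = ""
    for piece, _ in counter.most_common():
        zdict = piece + zdict  # most common pieces end up at the tail
        if len(zdict) >= max_size:
            break
    return zdict.encode()[-max_size:]

samples = [json.dumps({"type": "pin", "board": "recipes", "domain": "example.com"}),
           json.dumps({"type": "pin", "board": "travel", "domain": "example.com"})]
print(build_naive_zdict(samples))
```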

Amazon Redshift Supports Zstandard

Amazon Redshift now supports compressing data with Zstandard: 「Amazon Redshift now supports the Zstandard high data compression encoding and two new aggregate functions」.

Zstandard is a compression and decompression algorithm developed by people at Facebook, positioned mainly against zlib (or, put differently, gzip); the official site has plenty of comparison charts. The goal is a better compression ratio at the same compression speed.
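
Assuming the `zstandard` Python bindings, a quick side-by-side on some fabricated JSON looks like this (the levels are arbitrary; the official charts cover the speed/ratio trade-off properly):

```python
import json
import zlib
import zstandard  # pip install zstandard; assumed available for this demo

# Fabricated, repetitive JSON standing in for real data.
data = json.dumps([{"user_id": i, "event": "view", "page": f"/item/{i % 500}"}
                   for i in range(20000)]).encode()

gz = zlib.compress(data, 6)
zs = zstandard.ZstdCompressor(level=3).compress(data)
print(len(data), len(gz), len(zs))
```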

Redshift supporting Zstandard feels like a free upgrade for existing gzip users...

Pinterest's Work on InnoDB Compression

Pinterest stores all kinds of data in InnoDB and uses the InnoDB Compression feature. They put quite a bit of effort into working with Percona to improve InnoDB Compression's performance: 「Evolving MySQL Compression - Part 1」.

The article is fairly long; the key point is that they store large amounts of JSON in MySQL:

A Pin is stored as a 1.2 KB JSON blob in sharded MySQL databases.

They found that the predefined dictionary support in newer zlib versions pushes the compression savings higher (from the original ~50% to ~66%); and besides the better ratio, having the dictionary contents defined up front also gives a noticeable performance boost (warm up):

Zlib version 1.2.7.1 was released in early 2013 and added the ability to use a predefined “dictionary” to prefill the lookback window for LZ77. This seemed promising since we could “warm up” the lookback window with field names and other common strings. We ran a few tests using the Python Zlib library with a naive predefined dictionary consisting of an arbitrary Pin JSON blob. The compression savings increased from ~50% to ~66% at what appeared to be relatively little cost.
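
Their Python experiment is easy to reproduce with zlib's zdict support; the Pin-like blob and the "dictionary" below are made up, not Pinterest's real schema:

```python
import json
import zlib

def make_pin(i):
    # Made-up Pin-like JSON blob; real Pins average around 1.2 KB.
    return json.dumps({
        "type": "pin", "id": i, "link": f"https://example.com/item/{i}",
        "description": "Chocolate chip cookies", "board": {"name": "Recipes"},
        "image": {"url": f"https://example.com/img/{i}.jpg", "width": 236, "height": 354},
    }).encode()

zdict = make_pin(0)        # naive dictionary: an arbitrary blob of the same shape
pin = make_pin(1234567890)

plain = zlib.compress(pin, 9)

c = zlib.compressobj(level=9, zdict=zdict)
with_dict = c.compress(pin) + c.flush()

d = zlib.decompressobj(zdict=zdict)
assert d.decompress(with_dict) + d.flush() == pin

print(len(pin), len(plain), len(with_dict))  # the zdict version comes out noticeably smaller
```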

They also ran a read-only benchmark (which is, after all, the point). The chart is a bit blurry, but you can make out that the y-axis is queries/sec, while the x-axis is annotated with text: yellow is TokuDB, red is the original InnoDB compression, and the remaining lines are column compression with different dictionaries:

Below is a graph from our presentation which showed a read-only version of our production workload at concurrency of 256, 128, 32, 16, 8, 4 and 1 clients. TokuDB is in yellow, InnoDB page compression is in red and the other lines are column compression with a variety of dictionaries.

Overall throughput is noticeably higher than before, and the gap widens a lot at higher query concurrency.

This feature will be folded into future Percona releases, which should be very helpful for anyone stuffing JSON or XML into MySQL:

We worked with Percona to create a specification for column compression with an optional predefined dictionary and then contracted with Percona to build the feature.