Posts tagged "size"

RNN-based Lossless Compression

Spotted the paper "DeepZip: Lossless Compression using Recurrent Networks" on Hacker News. It uses an RNN to help compression squeeze data down further, and the code is published at kedartatwawadi/NN_compression on GitHub for anyone to test.

One of the more interesting parts is that data generated by a Lagged Fibonacci PRNG actually compresses very well, whereas traditional compression methods should get almost nothing out of it...
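To get a feel for why that is surprising: a lagged Fibonacci generator's output is completely determined by its recent history, even though it looks random byte by byte. Below is a generic Python sketch of the classic additive form; the lags and modulus are illustrative, not necessarily the parameters used in the paper.

    # Additive lagged Fibonacci: s[n] = (s[n-j] + s[n-k]) mod m
    def lagged_fibonacci(seed, j=24, k=55, m=2**32):
        state = list(seed)                # needs at least k seed values
        while True:
            nxt = (state[-j] + state[-k]) % m
            state.append(nxt)
            state.pop(0)
            yield nxt

    gen = lagged_fibonacci(range(1, 56))
    sample = [next(gen) for _ in range(10)]

    # Every output is a fixed function of the previous k outputs -- exactly the
    # kind of long-range structure an RNN can learn, and exactly what a
    # dictionary/entropy coder like gzip cannot exploit.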

The overall compression ratios are decent, but the only comparison is against gzip; no more advanced compressors (such as xz) were included. From the numbers, I'd guess it wouldn't win by much in typical cases, but the PRNG part alone means the paper offers a different direction for people to play with...

Amazon RDS Supports Larger Storage and More IOPS

An Amazon RDS upgrade: "Amazon RDS Now Supports Database Storage Size up to 16TB and Faster Scaling for MySQL, MariaDB, Oracle, and PostgreSQL Engines".

The storage cap goes from 6TB to 16TB, and existing instances can grow into it painlessly. The IOPS cap also goes from 30K to 40K:

Starting today, you can create Amazon RDS database instances for MySQL, MariaDB, Oracle, and PostgreSQL database engines with up to 16TB of storage. Existing database instances can also be scaled up to 16TB storage without any downtime.

The new storage limit is an increase from 6TB and is supported for Provisioned IOPS and General Purpose SSD storage types. You can also provision up to 40,000 IOPS for Provisioned IOPS storage volumes, an increase from 30,000 IOPS.
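Scaling an existing instance up is a single ModifyDBInstance call. A minimal boto3 sketch, with a made-up instance identifier:

    import boto3

    rds = boto3.client("rds")

    # Grow the instance's storage to 16 TB (16,384 GiB) in place; the resize
    # runs online, though it can take a while to finish in the background.
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",      # placeholder name
        AllocatedStorage=16384,
        ApplyImmediately=True,
    )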

That said, Amazon Aurora next door is still much bigger (64TB), and in practice you don't have to plan how much to allocate at all; it grows on its own:

Q: What are the minimum and maximum storage limits of an Amazon Aurora database?

The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance.

Microsoft and GitHub Team Up to Port GVFS to Linux and Mac

Microsoft and GitHub are working together to port GVFS, previously Windows-only, to Linux and Mac: "Microsoft and GitHub team up to take Git virtual file system to macOS, Linux".

GVFS exists to solve Microsoft's own pain points with using Git internally, because Microsoft's repositories are... a... bit... fat... (after all, quite a few products have been in development for a very long time).

Right now Git operations there are bottlenecked by I/O and memory cache limits:

Also, Git wasn't designed for a codebase that was so large, either in terms of the number of files and version history for each file, or in terms of sheer size, coming in at more than 300GB. When using standard Git, working with the source repository was unacceptably slow. Common operations (such as checking which files have been modified) would take multiple minutes.

The idea behind GVFS is to only actually fetch the parts you touch, which greatly reduces the I/O required...

Is Twitter Moments Big?

Saw the piece "How big is Twitter Moments?", which is about Twitter Moments.

By the author's estimates, Twitter Moments' usage should be larger than any media outlet in the world, yet you'll notice it has no buzz at all. Nobody talks about it, nobody cites it... and yet by the numbers it should be a gigantic product?

It prompts some reflection on what actually counts as a successful product.

Why PS4 Downloads Are So Slow

In "Why PS4 downloads are so slow" the author went to quite some effort to find the cause, and it turns out the slow downloads are intentional... The post also covers which situations trigger the slowdown and how to avoid it.

If you'd rather not read the whole thing, jump to the Conclusions section. The main cause is that the PS4 adjusts the TCP window size according to the background applications (even an idle background app affects the download's TCP window size), which then limits the speed:

If any applications are running, the PS4 appears to change the settings for PSN store downloads, artificially restricting their speed. Closing the other applications will remove the limit.

Using the TCP window size to throttle speed is a rather "creative" approach...
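It works as a throttle because of the basic window/RTT bound: a TCP connection can never move more than one receive window of data per round trip. A back-of-the-envelope calculation with made-up numbers:

    # throughput <= receive_window / round_trip_time
    window_bytes = 16 * 1024      # a deliberately small receive window (assumed)
    rtt_seconds = 0.15            # 150 ms to the download server (assumed)

    max_throughput = window_bytes / rtt_seconds
    print(f"{max_throughput / 1024:.0f} KiB/s")   # ~107 KiB/s, no matter how fast the line is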

Anyway, the fix when you hit this is to completely close every running application; downloads will then be much closer to normal...

How InnoDB Redo Log Size Affects Performance

Saw a discussion of how the InnoDB redo log size (i.e. innodb_log_file_size and innodb_log_files_in_group) affects performance in "Benchmark(et)ing with InnoDB redo log size".

The key point is right at the top: on newer MySQL, a larger redo log gives better performance (on average) in almost every case:

tl;dr - conclusions specific to my test

  1. A larger redo log improves throughput
  2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
  3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
  4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD

You can see that the average performance gain is significant, whether from a larger redo log or from upgrading to 5.7.

But the author also ran into a strange performance problem: although the average gain is significant, the degradation as more data gets loaded is actually quite severe; the details are on the original page.

The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).

So for now maybe just making the redo log bigger is enough (write performance will improve at the very least); there's no need to treat this as a reason to upgrade MySQL.
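For reference, the two settings mentioned above are static server options in my.cnf; the values below are only an illustration, not the benchmark's actual configuration, and they take effect only after a restart:

    [mysqld]
    # Total redo log capacity = innodb_log_file_size * innodb_log_files_in_group
    innodb_log_file_size      = 4G
    innodb_log_files_in_group = 2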

Google's Guetzli, a Compression Algorithm for JPEG

An algorithm from Google Research Europe for producing JPEGs that are both smaller and sharper without touching the decoder: "Announcing Guetzli: A New Open Source JPEG Encoder". The paper can be downloaded as "Guetzli: Perceptually Guided JPEG Encoder", and the code is available at google/guetzli on GitHub.

othree also wrote a post, "Guetzli: A New Open Source JPEG Encoder", introducing Guetzli.

At the same quality, Guetzli squeezes out another 29%~45% of space compared to existing compressors, which is a pretty stunning number:

We reach a 29-45% reduction in data size for a given perceptual distance, according to Butteraugli, in comparison to other compressors we tried.

The cost is that it needs an enormous amount of CPU resources to run, which keeps it from being used in very dynamic environments:

Guetzli's computation is currently extremely slow, which limits its applicability to compressing static content and serving as a proof-of-concept that we can achieve significant reductions in size by combining advanced psychovisual models with lossy compression techniques.

But for anything produced in a batch process (where waiting a long time doesn't matter much), this tool is worth considering...
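For example, a nightly build step that re-encodes a directory of images; the paths and quality value here are made up, and the command-line form follows the project's README:

    import pathlib
    import subprocess

    src = pathlib.Path("assets/originals")     # hypothetical input directory
    dst = pathlib.Path("assets/optimized")
    dst.mkdir(parents=True, exist_ok=True)

    # Guetzli is slow, so this belongs in a batch pipeline, not on the request path.
    for png in sorted(src.glob("*.png")):
        out = dst / (png.stem + ".jpg")
        subprocess.run(["guetzli", "--quality", "90", str(png), str(out)], check=True)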

EBS Can Now Grow Dynamically...

Amazon EBS volumes can now be enlarged on the fly, which is a pretty convenient feature for a lot of people: "Amazon Elastic Block Store (Amazon EBS) Enables Live Volume Modifications with Elastic Volumes".

The "no downtime" claim here of course still requires filesystem support:

Today we are introducing the Elastic Volumes feature for Amazon Elastic Block Store (Amazon EBS). This new capability allows you to modify configurations of live volumes with a simple API call or a few console clicks. Elastic Volumes makes it easy to dynamically increase capacity, tune performance, and change the type of any new or existing current generation volume with no downtime or performance impact.
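A minimal boto3 sketch of the new call (the volume ID is a placeholder); note that once the block device grows, the filesystem on top of it still has to be extended separately:

    import boto3

    ec2 = boto3.client("ec2")

    # Grow a live volume to 200 GiB without detaching it or stopping the instance.
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)

    # The OS then sees a larger block device; growing the filesystem
    # (growpart + resize2fs for ext4, xfs_growfs for XFS, ...) is a separate,
    # also-online step.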

It also mentions one particular combination: using CloudWatch with Lambda to make the adjustments:

You can streamline and automate changes using Amazon CloudWatch with AWS Lambda.

That approach is pretty entertaining XDDD

Google Improves Android APK Updates Again, Making Downloads Even Smaller

Google has updated the algorithm again to cut download sizes further, improving the savings from the earlier 47% to 65%: "Saving Data: Reducing the size of App Updates by 65%".

Back in July this year, the update algorithm adopted bsdiff, so that instead of downloading the whole APK you only download the diff, which cut download traffic by 47%: "Improvements for smaller app downloads on Google Play".

Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size.

Now, instead of diffing the APKs directly, they diff the uncompressed files and then package up the differences, which brings the savings to 65%:

Today, we're excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller.

The main reason is that DEFLATE, the algorithm used to compress APKs, is very sensitive to changes: altering a single character changes the entire stream that follows, so the differences are large and a diff algorithm doesn't do well.
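You can convince yourself of this with Python's zlib; the payload here is synthetic, but real APK entries behave the same way:

    import random
    import zlib

    # Build two payloads that differ by a single character near the start.
    random.seed(0)
    data = " ".join(random.choice(["alpha", "beta", "gamma", "delta"])
                    for _ in range(5000)).encode()
    changed = bytearray(data)
    changed[10] ^= 0x20               # flip one character's case
    changed = bytes(changed)

    ca, cb = zlib.compress(data), zlib.compress(changed)

    # Length of the common prefix of the two compressed streams.
    prefix = next((i for i, (x, y) in enumerate(zip(ca, cb)) if x != y),
                  min(len(ca), len(cb)))
    print(len(data), len(ca), prefix)
    # The compressed streams diverge almost immediately, so a binary diff of the
    # compressed files is far larger than the one-character source change.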

The benefit of File-by-File comes mainly from comparing the uncompressed files, which means files that didn't change don't get dragged in at all, and it also works better on binaries (most of the code is still the same). It works less well on already-compressed images, though, and those are a common reason APKs are bloated in the first place.

Two things are worth noting. One is that, for the sake of user experience, Google only uses File-by-File patching for auto updates, mainly because applying a File-by-File patch is quite a bit slower:

For now, we are limiting the use of this new patching technology to auto-updates only, i.e. the updates that take place in the background, usually at night when your phone is plugged into power and you're not likely to be using it. This ensures that users won't have to wait any longer than usual for an update to finish when manually updating an app.

The other is that this new approach saves Google 6PB of traffic per day, which is roughly 600Gbps if the traffic were spread out evenly:

The savings, compared to our previous approach, add up to 6 petabytes of user data saved per day!
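The 600Gbps figure is straightforward arithmetic:

    # 6 PB/day spread evenly over 86,400 seconds, in bits per second.
    print(6e15 * 8 / 86400 / 1e9)     # ~556 Gbit/s, i.e. roughly 600Gbps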

Improvements at this scale really make themselves felt XDDD
