
Cloudflare's jpegtran Performance on ARM

Cloudflare has put a lot of effort into ARM servers (see "Cloudflare 用 ARM 當伺服器的進展..." or the earlier "Cloudflare 測試 ARM 新的伺服器"). Recently they noticed that jpegtran performs rather poorly on ARM, spent quite a bit of effort optimizing it, and picked up some unexpected wins along the way: "NEON is the new black: fast JPEG optimization on ARM server".

The bar they set was for each ARM core to deliver roughly 50% of a Xeon core's performance, but they found it only reached about 26%:

Ideally we want to have the ARM performing at or above 50% of the Xeon performance per core. This would make sure we have no performance regressions, and net performance gain, since the ARM CPUs have double the core count as our current 2 socket setup.

In this case, however, I was disappointed to discover an almost 4X slowdown.

Since image-processing code like this has long been accelerated with SIMD instruction sets, the author figured that porting the existing SSE optimizations to NEON on ARM might help a lot:

Not one to despair, I figured out that applying the same optimizations I did for Intel would be trivial. Surely the NEON instructions map neatly to the SSE instructions I used before?
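To make that mapping concrete, here is a minimal sketch (my own example, not Cloudflare's actual code) of how a typical SSE intrinsic carries over to NEON almost one-to-one; a saturating 16-bit add of this kind shows up all over DCT/quantization-style code:

```c
/* Illustrative only: the same saturating add of eight signed 16-bit lanes,
 * written once for SSE2 and once for NEON. */
#if defined(__SSE2__)
#include <emmintrin.h>

static inline __m128i add_sat_s16(__m128i a, __m128i b) {
    return _mm_adds_epi16(a, b);   /* SSE2: PADDSW */
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>

static inline int16x8_t add_sat_s16(int16x8_t a, int16x8_t b) {
    return vqaddq_s16(a, b);       /* NEON: SQADD, a direct counterpart */
}
#endif
```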

After the port, re-testing showed the ARM core reaching 66% of the Xeon's performance, already past the original target... and in batch processing it was now faster than the Xeon:

Digging further, he found that NEON has some instructions with no SSE equivalent (nothing with similar functionality), which might allow even more speedup:

While going over the ARMv8 NEON instruction set, I found several unique instructions, that have no equivalent in SSE.
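As one illustration of that class of instruction (my own example; whether it is one of the instructions the post actually used is an assumption): NEON's LD4 structure loads de-interleave data as part of the load, something SSE has no single instruction for:

```c
/* Hypothetical sketch: one LD4 turns 16 interleaved RGBA pixels into four
 * separate 16-byte planes. On SSE this takes a sequence of shuffles. */
#include <arm_neon.h>
#include <stdint.h>

void split_rgba(const uint8_t *rgba, uint8_t *r, uint8_t *g,
                uint8_t *b, uint8_t *a) {
    uint8x16x4_t px = vld4q_u8(rgba);  /* de-interleaving load */
    vst1q_u8(r, px.val[0]);
    vst1q_u8(g, px.val[1]);
    vst1q_u8(b, px.val[2]);
    vst1q_u8(a, px.val[3]);
}
```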

With those instructions put to use as well, single-core performance reached 83% of the Xeon, and batch throughput improved quite a bit more:

Finally, with the whole server fully loaded, the two machines perform about the same (the ARM box actually comes out slightly ahead) while drawing less than half the power; even at the four workers they normally run (presumably to keep latency down), power efficiency works out to 6.5 times better:

With the new implementation Centriq outperforms the Xeon at batch reduction for every number of workers. We usually run Polish with four workers, for which Centriq is now 1.3 times faster while also 6.5 times more power efficient.

The takeaway: when optimizing for ARM, don't just port SSE code to NEON; also look for instructions that have no SSE counterpart but can help...

Percona Takes AWS's Advice and Re-benchmarks Percona XtraDB Cluster on gp2...

Late last year Percona benchmarked Percona XtraDB Cluster on AWS and made recommendations about which kind of EBS volume to use underneath; see my earlier post "Percona 分析在 AWS 上跑 Percona XtraDB Cluster 的效能 (I/O bound)".

The recommendation at the time was io1: more expensive, but better for performance.

Percona later heard from AWS engineers about another approach that gets similar performance out of gp2 at a cost well below io1: "Percona XtraDB Cluster on Amazon GP2 Volumes".

The trick relies on the fact that gp2 derives its available IOPS from the volume size. The official documentation describes gp2 performance (IOPS) like this:

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver the provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.

Under that rule, getting 10,000 IOPS requires a volume of roughly 3.3 TiB or more, so the AWS engineers suggested Percona simply allocate a larger volume and re-run the benchmark:

After publishing our material, Amazon engineers pointed that we should try GP2 volumes with the size allocated to provide 10000 IOPS. If we allocated volumes with size 3.3 TiB or more, we should achieve 10000 IOPS.
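The arithmetic behind that 3.3 TiB figure is just the baseline rule from the documentation quoted above; a tiny sketch:

```c
/* gp2 baseline IOPS: 3 IOPS per GiB, floored at 100 and capped at 10,000. */
#include <stdio.h>

static int gp2_baseline_iops(int size_gib) {
    int iops = size_gib * 3;
    if (iops < 100)   iops = 100;
    if (iops > 10000) iops = 10000;
    return iops;
}

int main(void) {
    printf("1000 GiB -> %d IOPS\n", gp2_baseline_iops(1000));  /* 3000  */
    /* 3,334 GiB (~3.3 TiB) is the smallest size that hits the 10,000 cap. */
    printf("3334 GiB -> %d IOPS\n", gp2_baseline_iops(3334));  /* 10000 */
    return 0;
}
```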

First, the measured performance: there is no big difference:

Then the storage cost: roughly half the price of the io1 setup:

As the documentation above notes, gp2 does not strictly guarantee its performance, but statistically it delivers 3 IOPS per GiB most of the time, whereas io1 performance is provisioned and guaranteed, so you rarely need to worry about it being unstable. That single difference translates into a sizable cost gap. Designing systems around this should help overall cost quite a bit... (though not necessarily latency, especially figures like P99).

Another approach worth considering...

Facebook Fined in South Korea for Being Too Slow???

Saw the news story "South Korea fines Facebook $369K for slowing user internet connections", which describes Facebook's rerouting behavior:

The Korea Communications Commission (KCC) began investigating Facebook last May and found that the company had illegally limited user access, as reported by ABC News. Local South Korean laws prohibit internet services from rerouting users’ connections to networks in Hong Kong and US instead of local ISPs without notifying those users. In a few cases, such rerouting slowed down users’ connections by as much as 4.5 times.

Routing users to servers in Hong Kong or the US without telling them sounds like something a GeoDNS setup plus Facebook's CDN architecture would do. But the original report includes another allegation:

The KCC probed claims that Facebook intentionally slowed access while it negotiated network usage fees with internet service providers.

The South Korean authorities also refused to accept the disclosure buried in the terms of use as valid notice:

Facebook said it did not violate the law in part because its terms of use say it cannot guarantee its services will operate without delays or interference. KCC officials rejected that argument, saying the terms were unfair. It recommended the company amend its terms of use.

Looks like this is headed to court?

Raspberry Pi 3 Gets a New Model B+

The Raspberry Pi 3 now has a new Model B+ revision: "Raspberry Pi 3 Model B+ on sale now at $35".

Besides a slightly faster CPU, it adds 802.11ac/5 GHz Wi-Fi (officially ~102 Mbps, versus ~35 Mbps on 2.4 GHz before) and faster wired networking (officially ~315 Mbps, versus ~95 Mbps before).

Then there is PXE boot support:

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

And support for powering the whole board over PoE:

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

Or just drawing more power XDDD

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.

So this picture came as no surprise XDDD (a fan!)

Though I still need to track down details on the fan; it doesn't seem to be on the product page...

Update: the product page for that fan picture appears to be "Raspberry Pi PoE HAT" (see the comment below).

Chrome on Windows Is Now Built with Clang

This was one of the livelier stories last week (i.e. lots of people watching the show XD): Google Chrome on Windows switched from Visual C++ to Clang for its builds: "Clang is now used to build Chrome for Windows".

The article suggests performance was probably not the main factor, since the two compilers trade wins across different benchmarks and the differences are small:

We conducted extensive A/B testing of performance. Performance telemetry numbers are about the same for MSVC-built and clang-built Chrome – some metrics get better, some get worse, but all of them are within 5% of each other.

The post goes on to speculate about the reasons for the switch: because Clang is open source and the Chrome team has strong engineering capacity, they can heavily customize the compiler, and using a single compiler across every platform saves cross-platform effort (such as dealing with compatibility quirks in syntax).

Visual C++'s advantages in commercial support and documentation just don't matter as much in that situation...

How Instagram Tackled Cassandra's Performance Problems

When it comes to fixing Cassandra's performance problems, ScyllaDB is probably the best-known effort, rewriting it in C++ for a large speedup. Instagram instead swapped out the underlying storage layer for RocksDB (this company really loves its own RocksDB...): "Open-sourcing a 10x reduction in Apache Cassandra tail latency".

The main reason: they found that Cassandra's data-handling path suffers from JVM GC pressure, and that this is the primary cause of Cassandra's poor performance:

Apache Cassandra is a distributed database with it’s own LSM tree-based storage engine written in Java. We found that the components in the storage engine, like memtable, compaction, read/write path, etc., created a lot of objects in the Java heap and generated a lot of overhead to JVM.

After the swap, tests show a big performance improvement and a big drop in GC stalls:

In one of our production clusters, the P99 read latency dropped from 60ms to 20ms. We also observed that the GC stalls on that cluster dropped from 2.5% to 0.3%, which was a 10X reduction!

Comparing the two approaches: ScyllaDB rewrites everything in C++ (keeping the same data structures), which removes the JVM GC problem outright. Rocksandra instead profiles the system and replaces only the hot part (here the data-handling code, swapped for RocksDB), abstracting out some interfaces along the way... Two different solutions, both eliminating the JVM GC problem.
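For reference, this is roughly what RocksDB looks like as an embedded key-value engine through its C API; a minimal sketch of my own (Rocksandra itself drives RocksDB from Java behind Cassandra's storage interfaces), with the data living outside any GC-managed heap. Assumes librocksdb is installed; error handling is omitted for brevity:

```c
#include <rocksdb/c.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *err = NULL;
    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);
    rocksdb_t *db = rocksdb_open(opts, "/tmp/rocksdb-demo", &err);

    /* Write and read one key; storage is native (LSM tree on disk + block
     * cache), so none of this churns the Java heap. */
    rocksdb_writeoptions_t *wopts = rocksdb_writeoptions_create();
    rocksdb_put(db, wopts, "user:1", 6, "alice", 5, &err);

    rocksdb_readoptions_t *ropts = rocksdb_readoptions_create();
    size_t len = 0;
    char *val = rocksdb_get(db, ropts, "user:1", 6, &len, &err);
    printf("user:1 -> %.*s\n", (int)len, val);

    free(val);
    rocksdb_readoptions_destroy(ropts);
    rocksdb_writeoptions_destroy(wopts);
    rocksdb_close(db);
    rocksdb_options_destroy(opts);
    return 0;
}
```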

nginx's HTTP/2 Is Getting Server Push

Saw on Twitter that nginx's HTTP/2 implementation is also getting server push:

It looks like you just emit the corresponding HTTP header and nginx handles the rest...

This feature is finally landing in nginx... for example, using a cookie to detect a first-time visitor and pushing the CSS/JS ahead of time via server push to speed up rendering.
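A minimal configuration sketch of the two styles (the paths and backend name are made up): pushing named assets directly, or letting nginx turn Link: rel=preload headers from the upstream application into pushes:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    location = /index.html {
        # Push these resources along with the HTML response.
        http2_push /css/app.css;
        http2_push /js/app.js;
    }

    location / {
        proxy_pass http://backend;
        # Convert "Link: </css/app.css>; rel=preload" response headers
        # from the upstream into HTTP/2 server pushes.
        http2_push_preload on;
    }
}
```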

KPTI (the Meltdown Mitigation) Hurts MyISAM Badly

MariaDB's "MyISAM and KPTI – Performance Implications From The Meltdown Fix" has some startling numbers. They received a report (see the ticket "[MDEV-15072] Massive performance impact after PTI fix - JIRA") that KPTI (the Meltdown mitigation) hits MyISAM performance hard:

Recently we had a report from a user who had seen a stunning 90% performance regression after upgrading his server to a Linux kernel with KPTI (kernel page-table isolation – a remedy for the Meltdown vulnerability).

Much of that 90% turned out to be an old VMware version that doesn't pass the relevant CPU features through to the guest, so newer versions should improve things considerably. Even so, the article still measured roughly a 40% performance loss on bare metal:

A big deal of those 90% was caused by running in an old version of VMware which doesn’t pass the PCID and INVPCID capabilities of the CPU to the guest. But I could reproduce a regression around 40% even on bare metal.

The rest of the post pitches MariaDB's Aria storage engine, which is less interesting... but knowing that MyISAM takes this kind of hit under KPTI matters, because we will keep running into KPTI for the next five years or so, and some people will still be on MyISAM...

How KPTI (the Meltdown Fix) Affects Different Kinds of Workloads

Netflix's Brendan Gregg has written up his testing of KPTI's performance impact: "KPTI/KAISER Meltdown Initial Performance Regressions".

Unlike the broad-brush tests others have published, his goal is to map measurable numbers to likely overhead, so that anyone who hasn't applied the patch yet can use those measurements to estimate the performance hit.

He puts the conclusions up front:

To understand the KPTI overhead, there are at least five factors at play. In summary:

  • Syscall rate: there are overheads relative to the syscall rate, although high rates are needed for this to be noticable. At 50k syscalls/sec per CPU the overhead may be 2%, and climbs as the syscall rate increases. At my employer (Netflix), high rates are unusual in cloud, with some exceptions (databases).
  • Context switches: these add overheads similar to the syscall rate, and I think the context switch rate can simply be added to the syscall rate for the following estimations.
  • Page fault rate: adds a little more overhead as well, for high rates.
  • Working set size (hot data): more than 10 Mbytes will cost additional overhead due to TLB flushing. This can turn a 1% overhead (syscall cycles alone) into a 7% overhead. This overhead can be reduced by A) pcid, available in Linux 4.14, and B) Huge pages.
  • Cache access pattern: the overheads are exacerbated by certain access patterns that switch from caching well to caching a little less well. Worst case, this can add an additional 10% overhead, taking (say) the 7% overhead to 17%.

The key contribution is the measurement method. Take the first factor, syscall rate: he uses sudo perf stat -e raw_syscalls:sys_enter -a -I 1000 to measure a workload's syscall count and maps it onto the chart below, where the X axis is thousands of calls per second and the Y axis is the performance loss:
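If you want to see the per-syscall cost directly rather than estimate it from an existing workload, a tiny microbenchmark like the following (my own sketch, assuming Linux and gcc, not part of Gregg's post) run with KPTI on and off makes the difference visible:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const long n = 10 * 1000 * 1000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < n; i++)
        syscall(SYS_getpid);   /* cheap syscall, bypasses any libc caching */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f ns per syscall, %.0f syscalls/sec\n",
           secs / n * 1e9, n / secs);
    return 0;
}
```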

This gives the whole organization (i.e. Netflix) a way to assess the impact.

Amazon Aurora (MySQL) Opens Parallel Query for Sign-up

AWS has announced Parallel Query support for Amazon Aurora (MySQL): "Amazon Aurora Parallel Query is Available for Preview".

The Parallel Query described here is more like Amazon Athena, scattering a single query across many machines:

Amazon Aurora Parallel Query improves the performance of large analytic queries by pushing processing down to the Aurora storage layer, spreading processing across hundreds of nodes.

In other words, this is the next step up in parallelizing a single SQL query.

Before this, AWS already supported parallelizing a single query across multiple CPUs on one machine: PostgreSQL has it natively since 9.6, and Amazon Aurora (MySQL) added multi-CPU processing of a single query in certain scenarios back in 2016 via Parallel Read Ahead (which I apparently never wrote about...): "Amazon Aurora Update – Parallel Read Ahead, Faster Indexing, NUMA Awareness".

The feature is currently in preview and open for testing in these regions:

The preview is available for the MySQL-compatible edition of Amazon Aurora, and is currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions. Sign up to get access.

It gives people who want better performance, but can't be bothered to rework their architecture, a way to simply pay their way out...
