Trade-offs between InnoDB and MyRocks

Mark Callaghan, the main author of MyRocks, put together a performance comparison on a large server where the data set fits in memory: "In-memory sysbench, a larger server and contention - part 1".

This is actually the situation you typically run into: once the business is big enough, you just buy a machine with 1TB of RAM plus several PCI-E SSDs and trade money for performance... (Most of the cost would be in the memory; I just checked, and a white-box server like this runs roughly NT$700,000 these days? Two of them for HA is only NT$1,400,000, which usually isn't a big deal for an organization at that scale...)

Of the three different cases, the last one should be the closest to a real-world workload:

You can see that InnoDB still beats MyRocks in almost every test (it only loses on random-points and insert-only).

I'm not sure how much development momentum MyRocks will keep going forward... (since Facebook's usage pattern is quite different from the typical case).

How Reddit Handles Page Views

Reddit explained how they handle pageviews: "View Counting at Reddit".

At Reddit's scale two points stand out. The first is making good use of the HyperLogLog data structure in Redis; at large volumes a tiny amount of error is acceptable:

The amount of memory varies per implementation, but in the case of this implementation, we could count over 1 million IDs using just 12 kilobytes of space, which would be 0.15% of the original space usage!

Wikipedia notes that for cardinalities on the order of 10^9, it takes only 1.5 KB of memory with a typical error of 2%:

The HyperLogLog algorithm is able to estimate cardinalities of > 10^9 with a typical error rate of 2%, using 1.5 kB of memory.
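A minimal sketch of how this looks with Redis, using the redis-py client (the key name and the randomly generated IDs below are made up for illustration; Reddit's real pipeline does more than this):

```python
import uuid

import redis  # redis-py client; assumes a Redis server on localhost

r = redis.Redis()

# Feed a batch of unique IDs into a HyperLogLog. PFADD only updates a
# small probabilistic sketch (~12 KB) instead of storing the IDs themselves.
with r.pipeline() as pipe:
    for _ in range(100_000):
        pipe.pfadd("views:post:42", uuid.uuid4().hex)
    pipe.execute()

# PFCOUNT returns an estimate of the number of distinct IDs seen,
# typically within a couple of percent of the true value.
print(r.pfcount("views:post:42"))
```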

The second is that writes are allowed to lag briefly (pageviews are not reflected in real time), using batching to reduce the load on the Cassandra cluster:

Writes to Cassandra are batched in 10-second groups per post in order to avoid overloading the cluster.
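A rough, single-process sketch of that batching idea (flush_to_cassandra, the key names, and the 10-second window handling are all hypothetical stand-ins here, not Reddit's actual code):

```python
import time
from collections import defaultdict

FLUSH_INTERVAL = 10  # seconds, matching the 10-second groups in the quote

pending = defaultdict(int)   # post_id -> views accumulated since the last flush
last_flush = time.monotonic()


def flush_to_cassandra(counts):
    # Hypothetical sink: in the real system this would be a batched write
    # into the Cassandra cluster rather than a print statement.
    for post_id, n in counts.items():
        print(f"batched write: post={post_id} +{n} views")


def record_view(post_id):
    """Buffer a pageview in memory instead of writing it out immediately."""
    global last_flush
    pending[post_id] += 1
    if time.monotonic() - last_flush >= FLUSH_INTERVAL:
        flush_to_cassandra(pending)
        pending.clear()
        last_flush = time.monotonic()
```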

Note that Redis is used as a cache layer here rather than a storage layer.

The main reason is probably that Redis positions itself as a data structure server rather than data structure storage (you can see this from its approach to durability), while Cassandra scales very easily as a key-value store but is slow on reads. The two complement each other nicely.

Amazon EC2 to Launch x1e.32xlarge, a Machine with 4TB of RAM

Amazon EC2 just announced another memory monster, the x1e.32xlarge: "EC2 In-Memory Processing Update: Instances with 4 to 16 TB of Memory + Scale-Out SAP HANA to 34 TB".

Later this year we plan to make the x1e.32xlarge instances available in several AWS regions, in both On-Demand and Reserved Instance form. These instances will offer 4 TB of DDR4 memory (twice as much as the x1.32xlarge), 128 vCPUs (four 2.3 GHz Intel® Xeon® E7 8880 v3 processors), high memory bandwidth, and large L3 caches.

Although SAP is used as the example throughout, this is also a way to buy time when a system hits a bottleneck (i.e. a workaround for when a happy problem shows up).

I wonder how much a machine like this actually costs to buy XDDD

How InnoDB Redo Log Size Affects Performance

"Benchmark(et)ing with InnoDB redo log size" discusses how the size of the InnoDB redo log (i.e. innodb_log_file_size and innodb_log_files_in_group) affects performance.

The key point is stated right up front: with newer MySQL versions, a larger redo log gives better (average) performance in almost every case:

tl;dr - conclusions specific to my test

  1. A larger redo log improves throughput
  2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
  3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
  4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD

You can see that the improvement in average performance is significant, whether you enlarge the redo log or upgrade to 5.7:

But the author also ran into some odd performance issues. Even though average performance improves noticeably, the degradation as more data gets inserted is actually quite severe; the details are shown on the original page.

The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).

So for now it may be enough to just enlarge the redo log (write performance will improve at the very least), without treating this as a reason to upgrade MySQL.
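For reference, enlarging the redo log is just a my.cnf change (the sizes below are arbitrary example values, not recommendations; in 5.6/5.7 a clean shutdown and restart of mysqld is needed for a new innodb_log_file_size to take effect):

```ini
# my.cnf -- example values only, tune to your own write rate
[mysqld]
innodb_log_file_size      = 2G   # size of each redo log file
innodb_log_files_in_group = 2    # total redo log capacity = 2 x 2G = 4G
```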

Amazon DynamoDB Accelerator (DAX)

DynamoDB has rolled out a new architecture that handles caching for you at the system level: "Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads".

DAX is compatible with the existing DynamoDB API:

DAX is a fully managed caching service that sits (logically) in front of your DynamoDB tables. It operates in write-through mode, and is API-compatible with DynamoDB.

Because of the cache, the architecture is eventually consistent:

Responses are returned from the cache in microseconds, making DAX a great fit for eventually-consistent read-intensive workloads.
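Put differently, code written against the plain DynamoDB API, like the boto3 sketch below, does not have to change; with DAX the same calls would go through a DAX client configured with the cluster endpoint (the table and key names here are made up), and it is the eventually-consistent reads that get served from the cache:

```python
import boto3

# Plain DynamoDB client; because DAX is API-compatible, the same
# get_item/put_item calls would be issued through a DAX client
# pointed at the DAX cluster endpoint instead.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.put_item(
    TableName="Posts",  # hypothetical table
    Item={"PostId": {"S": "42"}, "Views": {"N": "123"}},
)

# ConsistentRead=False (the default) is an eventually-consistent read,
# which is the kind of read DAX can answer from its in-memory cache.
resp = dynamodb.get_item(
    TableName="Posts",
    Key={"PostId": {"S": "42"}},
    ConsistentRead=False,
)
print(resp.get("Item"))
```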

The clusters are built out of r3-class machines, capped at ten nodes (which raises a big question mark for me):

Each DAX cluster can contain 1 to 10 nodes; you can add nodes in order to increase overall read throughput. The cache size (also known as the working set) is based on the node size (dax.r3.large to dax.r3.8xlarge) that you choose when you create the cluster. Clusters run within a VPC, with nodes spread across Availability Zones.

I'm not quite sure what the advantage is (compared to running memcached or a similar caching setup yourself); maybe it will click after a few days of thinking about it... :o

Cloudbleed: Cloudflare's Security Incident

Cloudflare has published the full timeline and the scope of impact: "Incident report on memory leak caused by Cloudflare parser bug".

It started on 2/18, when Tavis Ormandy of Google reached out to Cloudflare directly on Twitter:

The write-up on Google's Project Zero: "cloudflare: Cloudflare Reverse Proxies are Dumping Uninitialized Memory".

The root cause was a bug that sometimes made the servers return things they shouldn't have, potentially including sensitive data:

It turned out that in some unusual circumstances, which I’ll detail below, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data.

SSL keys were not included, though, mainly because they are isolated:

For the avoidance of doubt, Cloudflare customer SSL private keys were not leaked. Cloudflare has always terminated SSL connections through an isolated instance of NGINX that was not affected by this bug.

However, since this sensitive data even ended up indexed by search engines such as Google, it was quite serious: Cloudflare not only had to fix the bug, but also had to work with the various search engines to get the data removed:

Because of the seriousness of such a bug, a cross-functional team from software engineering, infosec and operations formed in San Francisco and London to fully understand the underlying cause, to understand the effect of the memory leakage, and to work with Google and other search engines to remove any cached HTTP responses.

The bug's window of impact started on 2016/09/22:

2016-09-22 Automatic HTTP Rewrites enabled
2017-01-30 Server-Side Excludes migrated to new parser
2017-02-13 Email Obfuscation partially migrated to new parser
2017-02-18 Google reports problem to Cloudflare and leak is stopped

Working backwards from traffic between 2/13 and 2/18, roughly 0.00003% of requests could have triggered the leak:

The greatest period of impact was from February 13 and February 18 with around 1 in every 3,300,000 HTTP requests through Cloudflare potentially resulting in memory leakage (that’s about 0.00003% of requests).

I have to say Tavis Ormandy is really hardcore: without the source code or any help from Cloudflare, he produced reproducible steps on his own:

I worked with cloudflare over the weekend to help clean up where I could. I've verified that the original reproduction steps I sent cloudflare no longer work.

The full timeline after the incident:

2017-02-18 0011 Tweet from Tavis Ormandy asking for Cloudflare contact information
2017-02-18 0032 Cloudflare receives details of bug from Google
2017-02-18 0040 Cross functional team assembles in San Francisco
2017-02-18 0119 Email Obfuscation disabled worldwide
2017-02-18 0122 London team joins
2017-02-18 0424 Automatic HTTPS Rewrites disabled worldwide
2017-02-18 0722 Patch implementing kill switch for cf-html parser deployed worldwide
2017-02-20 2159 SAFE_CHAR fix deployed globally
2017-02-21 1803 Automatic HTTPS Rewrites, Server-Side Excludes and Email Obfuscation re-enabled worldwide

In addition, "List of Sites possibly affected by Cloudflare's #Cloudbleed HTTPS Traffic Leak" collects the major affected sites (smaller sites were left out).

Amazon EC2 Launches the I3 Instance Family

Amazon EC2 has launched machines with NVMe SSDs, the I3 family: "Now Available – I3 Instances for Demanding, I/O Intensive Applications".

Looking at Tokyo-region pricing, r4.16xlarge and i3.16xlarge both have 64 vCPUs and 488GB of RAM. There are only two differences:

  • The first is that r4 is rated at only 195 ECU while i3 gets 200 ECU, so it is slightly faster.
  • The second is that i3 adds 8 x 1900 GB NVMe SSDs.

Yet the price difference is small ($5.12/hr vs $5.856/hr), so if your workload can make good use of the SSDs, it's actually quite a bargain compared to r4.*...

Linode Launches a $5/month Plan, DigitalOcean Launches Load Balancers

Linode and DigitalOcean, two well-known VPS providers, have both rolled out new offerings: "High-Memory Instances and $5 Linodes" and "Load Balancers: Simplifying High Availability".

Linode also increased the disk space of the $10/month plan:

And finally, the existing Linode 2GB ($10/mo) plan is receiving a free storage upgrade from 24GiB to 30GiB.

The outbound speed limit was also raised from the previous 125Mbps max to 1000Mbps (the old figures can be seen on the earlier version of the page):

And finally finally, we’ve also increased the outbound network speed limit on all plans to be at minimum 1000 Mbits. Existing Linodes will need to reboot to pick up the new value, that’s it!

EC2's r4 Instance Family Is Now Available...

Amazon EC2's r4.* instances are finally available: "Amazon EC2 R4 instances are now available in new regions".

Amazon EC2 R4 instances are now available in the following regions: Asia Pacific (Tokyo), Asia Pacific (Singapore), South America (São Paulo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Canada (Central), and EU (London).

r4.16xlarge (488GB) fills the gap between r3.8xlarge (244GB) and x1.16xlarge (976GB); previously you had to spin up a p2.8xlarge (488GB) instead, which isn't available in every region, and the GPUs go to waste if you don't need them...