
Amazon Aurora's Serverless and Multi-master

Amazon Aurora has rolled out two new things. The first is Serverless, which further reduces the situations that need human intervention: "In The Works – Amazon Aurora Serverless".

The first highlight of Serverless is per-second billing:

Today we are launching a preview (sign up now) of Amazon Aurora Serverless. Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis.

Next is very fast auto-scaling:

The endpoint is a simple proxy that routes your queries to a rapidly scaled fleet of database resources. This allows your connections to remain intact even as scaling operations take place behind the scenes. Scaling is rapid, with new resources coming online within 5 seconds

Put together, these two mean that besides scaling quickly on Amazon EC2, the database behind it can now scale too...
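
As a rough illustration of what per-second billing means for a spiky workload, here is a hedged sketch; the capacity numbers and the per-second price below are made up for illustration and are not Aurora Serverless's actual pricing:

```python
# Rough sketch: estimating cost for a spiky workload billed per second.
# All numbers below are hypothetical, not actual Aurora Serverless pricing.

PRICE_PER_CAPACITY_SECOND = 0.000017  # hypothetical $ per capacity-unit-second

# (capacity_units, duration_seconds) pairs describing a variable workload:
workload = [
    (2, 8 * 3600),    # 2 units for 8 mostly idle hours
    (16, 2 * 3600),   # 16 units during a 2-hour traffic spike
    (2, 14 * 3600),   # back to 2 units for the rest of the day
]

cost = sum(units * seconds * PRICE_PER_CAPACITY_SECOND
           for units, seconds in workload)
print(f"Estimated daily cost: ${cost:.2f}")
```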

The second is the Multi-master architecture: "Sign Up for the Preview of Amazon Aurora Multi-Master".

Amazon Aurora Multi-Master allows you to create multiple read/write master instances across multiple Availability Zones. This enables applications to read and write data to multiple database instances in a cluster, just as you can read across Read Replicas today.

(Come to think of it, I had always mistakenly assumed Aurora instances were R/W masters...)

Anyway, I have no idea how this feature is layered on top... and I wonder whether there will be serious distributed lock issues, so that the recommendation ends up being to normally write to just one node anyway (which is what happens with PXC).

GitHub repositions what it uses Redis for...

GitHub Engineering explains why they changed how they use Redis: "Moving persistent data out of Redis".

Within GitHub, Redis is used in two different ways. One is called transient Redis and is used purely as a cache:

We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.
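
The "transient Redis" pattern is basically cache-aside with a TTL. A minimal sketch with redis-py; the key name and expensive_computation() are placeholders of my own, not GitHub's code:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def expensive_computation(repo_id):
    # Placeholder for work done against Git repositories or MySQL.
    return {"repo": repo_id, "stars": 123}

def cached_result(repo_id, ttl=300):
    key = f"repo-stats:{repo_id}"           # hypothetical key naming
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the expensive work
    value = expensive_computation(repo_id)
    r.set(key, json.dumps(value), ex=ttl)   # store with a TTL; Redis evicts as LRU
    return value
```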

The other one has persistence turned on and is called persistent Redis:

We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.

What this post is about is persistent Redis being replaced with MySQL (InnoDB) storage:

Recently we made the decision to disable persistence in Redis and stop using it as a source of truth for our data. The main motivations behind this choice were to:

  • Reduce the operational cost of our persistence infrastructure by removing some of its complexity.
  • Take advantage of our expertise operating MySQL.
  • Gain some extra performance, by eliminating the I/O latency during the process of writing big changes on the server state to disk.

For the majority of callsites, we replaced persistent Redis with GitHub::KV, a MySQL key/value store of our own built atop InnoDB, with features like key expiration. We were able to use GitHub::KV almost identically as we used Redis: from trending repositories and users for the explore page, to rate limiting to spammy user detection.
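
GitHub::KV itself is a Ruby library on top of an InnoDB table, but the shape is easy to see in a loose sketch. Here is one in Python using SQLite as a stand-in for MySQL, with key expiration as described; the table layout and key names are my own guesses, not GitHub's schema:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")  # stand-in for a MySQL/InnoDB table
db.execute("""
    CREATE TABLE key_values (
        k          TEXT PRIMARY KEY,
        v          BLOB,
        expires_at REAL          -- NULL means the key never expires
    )
""")

def kv_set(key, value, ttl=None):
    expires_at = time.time() + ttl if ttl else None
    db.execute("REPLACE INTO key_values (k, v, expires_at) VALUES (?, ?, ?)",
               (key, value, expires_at))

def kv_get(key):
    row = db.execute(
        "SELECT v FROM key_values WHERE k = ? "
        "AND (expires_at IS NULL OR expires_at > ?)",
        (key, time.time())).fetchone()
    return row[0] if row else None

kv_set("rate-limit:user:42", b"3", ttl=60)   # hypothetical key, as in rate limiting
print(kv_get("rate-limit:user:42"))
```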

The rest of the post goes into a fair bit of the migration process (including rewriting some features), but doesn't explain very clearly why they didn't keep using Redis.

Going only by the three points they list: maybe the I/O cost of persistence was too high and hard to squeeze more performance out of, while on the other hand a huge amount of work has already gone into InnoDB, so just using it solves the problem?

Still, you can tell the switch took quite a bit of effort: some applications used Redis in patterns that couldn't be carried straight over to InnoDB and had to be rewritten...

Percona announces support for MyRocks (the RocksDB engine for MySQL)

RocksDB is the library Facebook forked from Google's LevelDB, and it has since been adopted more widely and received more investment... (a look at the two projects on GitHub should give you a feel for it)

After improving it, the Facebook folks then put quite a bit of effort into porting it onto MySQL...

There had already been plenty of hints on Twitter, but this time it's the official announcement on Percona's blog that they will support MyRocks: "Announcing MyRocks in Percona Server for MySQL".

According to the current plan, the experimental builds will come out in Q1 2017. Going by Percona's usual quality, they should run smoothly enough in a test environment (as long as there's no heavy load yet):

We will provide the experimental builds of MyRocks in Percona Server in Q1 2017, and we encourage you to start testing and experimenting so we can quickly release a solid GA version.

A comment under the post happens to bring up Percona's other product line, TokuDB, and the overlap between the two:

MyRocks seems pretty similar to TokuDB. They are both write-optimized. MyRocks uses LSM tree while TokuDB uses fractal tree.

How do the 2 compare? Which one would you recommend using?

TokuDB, which Percona bought earlier, overlaps quite a bit with Facebook's MyRocks (both are optimized for writes). Presumably the performance gap coming from the relative maturity of fractal trees versus LSM trees is still too obvious (and the resources the companies behind them pour in certainly matter too), which is why Percona decided to support MyRocks instead of pushing its own TokuDB with full force... (hm, so TokuDB has become the lost cause?)
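
To see why LSM-based engines like RocksDB are write-optimized, here is a toy sketch (not how RocksDB actually works): writes land in an in-memory memtable and are flushed as immutable sorted runs, so random updates become sequential I/O at flush time.

```python
import bisect

class ToyLSM:
    """Toy LSM write path: memtable + immutable sorted runs (no compaction)."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}        # recent writes, cheap in-memory updates
        self.runs = []            # flushed, immutable sorted lists of (key, value)
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            # Flush as one sorted, sequential write instead of random updates.
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):          # newest run wins
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

lsm = ToyLSM()
for i in range(10):
    lsm.put(f"k{i}", i)
print(lsm.get("k3"))
```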

I wonder whether, once it matures, it has a shot at becoming an InnoDB replacement...

The impact of Firefox's heavy writes on SSDs

"Firefox is eating your SSD - here is how to fix it" looks at the impact of Firefox's writes on SSDs. First, the fix quoted from the article:

After some digging, I found out that this behavior is controlled by a parameter that you can access through typing “about:config” in the address bar. This parameter is called: browser.sessionstore.interval

The default is once every 15 seconds; the author changed it to once every 30 minutes, and the write volume dropped by about 5x (which should probably be read as down to 1/6?):

It is set to 15 seconds by default. In my case, I reset it to a more sane (at least for me) 30 minutes. Since then, I’m only seeing about 2GB written to disk when my workstation is left idle, which still feels like a lot but is 5 times less than before.

According to an update further down the article, Google Chrome has a similar issue, but no fix is given for it yet...

Netflix speeding up sendfile() under TLS

Netflix wrote up the technical details of how they protect viewing privacy: "Protecting Netflix Viewing Privacy at Scale".

It mentions that under the 2012-era Netflix Open Connect program, an Open Connect Appliance (OCA, the program that places servers inside ISP facilities) did only 8Gbps per server, while by 2016 a single server can hit 90Gbps:

As we mentioned in a recent company blog post, since the beginning of the Open Connect program we have significantly increased the efficiency of our OCAs - from delivering 8 Gbps of throughput from a single server in 2012 to over 90 Gbps from a single server in 2016.

In the early days, Netflix pushed video out with sendfile(), which runs in kernel space and is therefore very efficient.

When the video itself moved to HTTPS (TLS), one of the performance problems they hit was that sendfile() could no longer be used; the data had to be encrypted in userland and sent back through the traditional write() path instead, which hurts performance a lot.
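
To make the two paths concrete, here is a hedged Python sketch of plain sendfile() versus read-encrypt-write in userland; it only shows where the extra copy and CPU work come from and has nothing to do with Netflix's actual FreeBSD implementation:

```python
import os
import socket

CHUNK = 64 * 1024

def serve_plain(sock: socket.socket, path: str):
    """Kernel-space path: the file goes straight to the socket, no userland copy."""
    with open(path, "rb") as f:
        os.sendfile(sock.fileno(), f.fileno(), 0, os.path.getsize(path))

def serve_tls(tls_sock, path: str):
    """Userland path: every chunk is read into the process, encrypted, then written."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):      # copy into userland
            tls_sock.sendall(chunk)        # an ssl.SSLSocket encrypts here, then write()s
```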

So they made the kernel support the AES family of ciphers (including AES-GCM and AES-CBC), and the performance gain is around 30%:

Our changes in both the BoringSSL and ISA-L test situations significantly increased both CPU utilization and bandwidth over baseline - increasing performance by up to 30%, depending on the OCA hardware version.

The beginning of the article also covers some of the background for choosing AES-GCM and AES-CBC: mainly that AES-GCM has better security properties, with AES-CBC kept for older clients that don't support AES-GCM:

We evaluated available and applicable ciphers and decided to primarily use the Advanced Encryption Standard (AES) cipher in Galois/Counter Mode (GCM), available starting in TLS 1.2. We chose AES-GCM over the Cipher Block Chaining (CBC) method, which comes at a higher computational cost. The AES-GCM cipher algorithm encrypts and authenticates the message simultaneously - as opposed to AES-CBC, which requires an additional pass over the data to generate keyed-hash message authentication code (HMAC). CBC can still be used as a fallback for clients that cannot support the preferred method.
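
The one-pass versus two-pass difference is the core of that argument: GCM encrypts and authenticates together, while CBC needs a separate HMAC pass over the ciphertext. A minimal sketch with the cryptography package, with throwaway keys purely for illustration:

```python
import os
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

plaintext = b"some video segment bytes"
key = os.urandom(32)

# AES-GCM: one pass produces ciphertext with the authentication tag attached.
nonce = os.urandom(12)
ct_and_tag = AESGCM(key).encrypt(nonce, plaintext, None)

# AES-CBC: encrypt, then a second pass over the ciphertext to compute an HMAC.
iv = os.urandom(16)
padder = padding.PKCS7(128).padder()
padded = padder.update(plaintext) + padder.finalize()
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = enc.update(padded) + enc.finalize()
mac = hmac.new(os.urandom(32), ct, hashlib.sha256).digest()  # separate MAC key
```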

Also, the OCA machines are all new enough to support the AES-NI instruction set, so performance there isn't much of a problem:

All revisions of Open Connect Appliances also have Intel CPUs that support AES-NI, the extension to the x86 instruction set designed to improve encryption and decryption performance. We needed to determine the best implementation of AES-GCM with the AES-NI instruction set, so we investigated alternatives to OpenSSL, including BoringSSL and the Intel Intelligent Storage Acceleration Library (ISA-L).

However, in the "Netflix Open Connect Appliance Deployment Guide" (26 July 2016 edition), it looks like they still use multiple 10Gbps links bonded with LACP:

You must be able to provision 2-4 x 10 Gbps ethernet ports in a LACP LAG per OCA. The exact quantity depends on the OCA type.

Maybe that's preparation for moving to 40Gbps or 100Gbps in the next revision...?

MariaDB's read/write splitting tool: MaxScale

MariaDB's MaxScale provides a MySQL-compatible proxy interface that hides the architecture of the group of MySQL servers behind it, so the application doesn't have to deal with that part.

Percona's people present MaxScale as a read/write splitting tool: "High availability with asynchronous replication… and transparent R/W split".

If you're using an ORM that supports read/write splitting (like Illuminate::Database in Laravel), the ORM library handles this for you and you can skip the work.

But in other cases, like when you don't have the application's source code or it can only be configured with a single set of servers, you have to rely on software like MaxScale to spread the load for you.
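
What a read/write splitter does at its core is look at each statement and send writes to the master while spreading reads across replicas. A very rough sketch of that routing decision follows; nothing here is MaxScale's actual logic or configuration, and the hostnames are placeholders:

```python
import itertools

MASTER = "db-master.internal"                      # placeholder hostnames
replicas = itertools.cycle(["db-replica-1.internal", "db-replica-2.internal"])

WRITE_PREFIXES = ("insert", "update", "delete", "replace", "create",
                  "alter", "drop", "begin", "commit", "set")

def route(statement: str) -> str:
    """Pick a backend for one SQL statement: writes to master, reads round-robin."""
    first_word = statement.lstrip().split(None, 1)[0].lower()
    if first_word in WRITE_PREFIXES:
        return MASTER
    return next(replicas)

print(route("SELECT * FROM users WHERE id = 1"))          # -> a replica
print(route("UPDATE users SET name = 'x' WHERE id = 1"))  # -> master
```

Real splitters (and Laravel's built-in support) also have to pin open transactions, and usually the reads right after a write, to the master so you don't read stale data from a lagging replica.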

The example Percona gives comes with plenty of config files, and it should work with just a bit of tweaking (performance tuning is of course a separate effort); anyone interested should be able to pick it up and dig in?

Comparing PCIe SSDs and SATA SSDs

The folks at LogicMonitor compared PCIe SSDs against SATA SSDs, and what they care about is read/write latency rather than raw throughput: "Device Utilization of PCIe and SATA SSDs".

The article is quite long and walks through how they tracked down the cause, from the effect of latency through to the change in queue servicing behavior.

After switching to PCIe SSDs, write latency fell from 1.8ms to around 0.02ms, roughly two orders of magnitude.
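
A crude way to get a feel for this kind of number yourself is to time small synchronous writes; this measures the whole write()+fsync() path through the filesystem, so it is only a rough proxy for device write latency:

```python
import os
import time

def avg_sync_write_latency(path="latency-test.bin", rounds=200, size=4096):
    """Average seconds per 4KB write+fsync; a crude stand-in for device write latency."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(rounds):
            os.write(fd, buf)
            os.fsync(fd)                 # force the write down to the device
        return (time.perf_counter() - start) / rounds
    finally:
        os.close(fd)
        os.remove(path)

print(f"{avg_sync_write_latency() * 1000:.3f} ms per synchronous 4KB write")
```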

The article also mentions the benchmarking tool fio; I should find some time to try it out and get familiar with it...

Percona's test of mysql_query_cache (using Magento as the example)

Percona's people take a present-day look at the MySQL query cache: "The MySQL query cache: Worst enemy or best friend?".

The starting point is mainly the suspicion that the query cache's global mutex should be a net negative on today's hardware (mainly because of the growth in CPU core counts), but it's unclear how big the impact is:

The query cache is well known for its contentions: a global mutex has to be acquired for any read or write operation, which means that any access is serialized. This was not an issue 15 years ago, but with today’s multi-core servers, such serialization is the best way to kill performance.
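
The serialization argument is easy to demonstrate outside MySQL: if every operation has to take the same global lock, adding threads stops adding throughput. A toy sketch of that effect, which only illustrates the shape of the problem and not the query cache's actual implementation:

```python
import threading
import time

global_lock = threading.Lock()

def worker(ops, use_lock):
    for _ in range(ops):
        if use_lock:
            with global_lock:       # every "query" serializes on one mutex
                time.sleep(0.0001)  # pretend cache lookup / bookkeeping
        else:
            time.sleep(0.0001)      # same work, no shared lock

def run(threads, use_lock, ops=200):
    start = time.perf_counter()
    ts = [threading.Thread(target=worker, args=(ops, use_lock)) for _ in range(threads)]
    for t in ts: t.start()
    for t in ts: t.join()
    return threads * ops / (time.perf_counter() - start)

for n in (1, 4, 16):
    print(n, "threads:",
          f"locked {run(n, True):8.0f} ops/s,",
          f"unlocked {run(n, False):8.0f} ops/s")
```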

This part seems a bit odd to me: a PK lookup should be at the single-digit-millisecond level (a typical e-commerce site's data should fit in memory), so I'm not sure how he measured it:

However from a performance point of view, any query cache hit is served in a few tens of microseconds while the fastest access with InnoDB (primary lookup) still requires several hundreds of microseconds. Yes, the query cache is at least an order of magnitude faster than any query that goes to InnoDB.

Anyway, he actually tested two different configurations, one with the query cache enabled and one with it disabled; the exact settings and the testing methodology are in the original article, so I won't copy them here. The results show that turning the query cache off gives noticeably better throughput:

Throughput scales well up to somewhere between 10 and 20 threads (for the record the server I was using had 16 cores). But more importantly, even at the higher concurrencies, the overall throughput continued to increase: at 20 concurrent threads, MySQL was able to serve nearly 3x more queries without the query cache.

Pretty much as expected: as the hardware changes, the algorithms have to change with it, or you run into problems...
