Amazon Aurora's Serverless and Multi-master

Amazon Aurora announced two new things. The first is Serverless, which further reduces the situations that need human intervention: 「In The Works – Amazon Aurora Serverless」.

The first highlight of Serverless is per-second billing:

Today we are launching a preview (sign up now) of Amazon Aurora Serverless. Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis.

Next is the extremely fast auto-scaling:

The endpoint is a simple proxy that routes your queries to a rapidly scaled fleet of database resources. This allows your connections to remain intact even as scaling operations take place behind the scenes. Scaling is rapid, with new resources coming online within 5 seconds

Put together, these mean that besides scaling quickly on Amazon EC2, the backend database can now scale too...
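The preview's provisioning API is not public, so as a concrete sketch here is what creating such a cluster might look like through boto3's RDS client; the EngineMode and ScalingConfiguration parameters are my assumption of how this could surface (modeled on the existing create_db_cluster call), and every identifier and credential is a placeholder:

    # A minimal sketch of creating an Aurora Serverless cluster with boto3.
    # EngineMode/ScalingConfiguration are assumptions about how the preview
    # might be exposed; all names and credentials are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_cluster(
        DBClusterIdentifier="my-serverless-cluster",  # placeholder
        Engine="aurora",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="change-me",
        ScalingConfiguration={
            "MinCapacity": 2,            # capacity bounds for auto-scaling
            "MaxCapacity": 16,
            "AutoPause": True,           # assumption: pause compute when idle
            "SecondsUntilAutoPause": 300,
        },
    )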

The second is the Multi-master architecture: 「Sign Up for the Preview of Amazon Aurora Multi-Master」.

Amazon Aurora Multi-Master allows you to create multiple read/write master instances across multiple Availability Zones. This enables applications to read and write data to multiple database instances in a cluster, just as you can read across Read Replicas today.

(Incidentally, I had always mistakenly assumed Aurora instances were all R/W masters...)

Anyway, I have no idea how this feature is layered on top of the existing architecture... I do wonder whether it will run into serious distributed lock issues, to the point where the practical recommendation becomes sending all writes to a single node anyway (which is what people do with PXC).
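If that worry turns out to be real, the PXC-style mitigation is straightforward to express in application code: pick one node as the sole write target and only spread reads. A minimal sketch, assuming pymysql and placeholder hostnames:

    # PXC-style write pinning: all writes go to one designated node,
    # reads can fan out. Hostnames/credentials are placeholders.
    import random
    import pymysql

    WRITER = "node1.example.internal"          # the single write target
    READERS = ["node1.example.internal",
               "node2.example.internal",
               "node3.example.internal"]

    def connect(host):
        return pymysql.connect(host=host, user="app",
                               password="secret", database="app")

    def run_write(sql, args=None):
        conn = connect(WRITER)                 # never write anywhere else
        with conn.cursor() as cur:
            cur.execute(sql, args)
        conn.commit()
        conn.close()

    def run_read(sql, args=None):
        conn = connect(random.choice(READERS)) # reads are safe to spread
        with conn.cursor() as cur:
            cur.execute(sql, args)
            rows = cur.fetchall()
        conn.close()
        return rows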

Asynchronous Key Prefetch in Amazon Aurora (MySQL)

Amazon Aurora (MySQL) rolled out a new performance improvement that speeds up JOIN queries: 「Amazon Aurora (MySQL) Speeds Join Queries by More than 10x with Asynchronous Key Prefetch」.

It looks like an optimization for a specific access pattern, turning what would be random access into sequential access for a large performance win:

This feature applies to queries that require use of the Batched Key Access (BKA) join algorithm and Multi-Range Read (MRR) optimization, and improves performance when the underlying data set is not in the main memory buffer pool or query cache.

Then again, RAM is still the best accelerator there is; if you can just throw more memory at the problem, do that first... XD
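Since AKP only applies to queries that already use BKA joins with MRR, the practical first step is making sure those optimizer switches are on and that your query's plan actually engages them. A minimal sketch, assuming pymysql and placeholder connection details (any Aurora-specific switch for AKP itself is omitted, since I have not verified its name):

    # Enable the two stock MySQL optimizations that AKP builds on:
    # Batched Key Access (BKA) joins and Multi-Range Read (MRR).
    import pymysql

    conn = pymysql.connect(host="aurora-cluster.example.internal",
                           user="app", password="secret", database="app")
    with conn.cursor() as cur:
        # BKA needs cost-based MRR turned off to reliably engage.
        cur.execute("SET optimizer_switch = "
                    "'mrr=on,mrr_cost_based=off,batched_key_access=on'")
        # A 'Using join buffer (Batched Key Access)' note in the EXPLAIN
        # output means the query qualifies for BKA (and thus for AKP).
        cur.execute("EXPLAIN SELECT o.* FROM orders o "
                    "JOIN customers c ON c.id = o.customer_id")
        for row in cur.fetchall():
            print(row)
    conn.close()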

Unbalanced Reads from SSDs in RAID1 on Linux

In 「Unbalanced reads from SSDs in software RAID mirrors in Linux」, the author noticed while looking at S.M.A.R.T. data that a RAID1 array made of two SSDs showed a very obvious read imbalance:

242 Total_LBAs_Read [...] 16838224623
242 Total_LBAs_Read [...] 1698394290

The reason is that Linux uses a different read-balancing algorithm for RAID1 arrays containing SSDs:

The current state of RAID1 read balancing is kind of complex, but the important thing here in all kernels since 2012 is that if you have SSDs and at least one disk is idle, the first idle disk will be chosen.

A late-2016 change made the algorithm even more aggressive, extending this behavior to non-SSD members as well:

In kernels with the late 2016 change, this widens to if at least one disk is idle, the first idle disk will be chosen, even if all mirrors are HDs.

Since SSDs are fast (and therefore often idle), this puts almost all of the read load on the first disk... For SSDs this should be fine (in theory reads do not wear an SSD out), but it still feels a bit odd.
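Checking whether your own mirror has this skew is a small wrapper around smartctl. A minimal sketch, assuming smartmontools is installed, the devices are /dev/sda and /dev/sdb, and both drives report SMART attribute 242:

    # Compare SMART attribute 242 (Total_LBAs_Read) across the two
    # members of the mirror to spot the read imbalance.
    import subprocess

    def total_lbas_read(device):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == "242":  # 242 = Total_LBAs_Read
                return int(fields[-1])         # raw value is the last column
        return None

    a = total_lbas_read("/dev/sda")
    b = total_lbas_read("/dev/sdb")
    print(f"/dev/sda: {a}, /dev/sdb: {b}, ratio: {a / b:.1f}x")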

Performance Impact of Different Isolation Levels on MySQL

Every benchmark I have seen so far reaches the same conclusion: because InnoDB in MySQL is heavily optimized for RR (REPEATABLE-READ), RR actually performs better than both RC (READ-COMMITTED) and RU (READ-UNCOMMITTED).

If you are unclear on the differences between RR/RC/RU, see the explanation in Wikipedia's 「Isolation (database systems)」...

From 「Repeatable read versus read committed for InnoDB」, which tested 5.0 back in 2010, to 「MySQL Performance : Impact of InnoDB Transaction Isolation Modes in MySQL 5.7」, which tested 5.7 in 2015, RR consistently comes out ahead of RC/RU... (in the charts, the three runs are RR/RC/RU respectively).

So on MySQL there is really no reason to use RC/RU... (head in hands)
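If you want to verify this against your own workload, the isolation level can be switched per session, which makes a quick A/B loop easy. A minimal sketch, assuming pymysql and a placeholder table t(id, v):

    # Time the same read loop under each isolation level.
    # Connection details and the table are placeholders.
    import time
    import pymysql

    def timed_reads(level, n=1000):
        conn = pymysql.connect(host="127.0.0.1", user="app",
                               password="secret", database="test")
        with conn.cursor() as cur:
            cur.execute(f"SET SESSION TRANSACTION ISOLATION LEVEL {level}")
            start = time.monotonic()
            for i in range(n):
                cur.execute("SELECT v FROM t WHERE id = %s", (i % 100,))
                cur.fetchall()
            elapsed = time.monotonic() - start
        conn.close()
        return elapsed

    for level in ("REPEATABLE READ", "READ COMMITTED", "READ UNCOMMITTED"):
        print(level, timed_reads(level))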

Amazon DynamoDB Accelerator (DAX)

A new piece of DynamoDB architecture that handles caching for you on the service side: 「Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads」.

DAX is compatible with the existing DynamoDB API:

DAX is a fully managed caching service that sits (logically) in front of your DynamoDB tables. It operates in write-through mode, and is API-compatible with DynamoDB.

Because of the cache, the architecture is eventually consistent:

Responses are returned from the cache in microseconds, making DAX a great fit for eventually-consistent read-intensive workloads.

Clusters are built from r3-family machines, capped at ten nodes (which leaves me with a big question mark):

Each DAX cluster can contain 1 to 10 nodes; you can add nodes in order to increase overall read throughput. The cache size (also known as the working set) is based on the node size (dax.r3.large to dax.r3.8xlarge) that you choose when you create the cluster. Clusters run within a VPC, with nodes spread across Availability Zones.
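The cluster itself is managed through the DAX control-plane API. As a rough sketch of what the announcement describes, here is the corresponding boto3 call; the role ARN, subnet group, and names are all placeholders:

    # Create a DAX cluster with boto3; ReplicationFactor is the node
    # count (1-10 per the announcement). All identifiers are placeholders.
    import boto3

    dax = boto3.client("dax", region_name="us-east-1")

    dax.create_cluster(
        ClusterName="my-dax",
        NodeType="dax.r3.large",      # cache size scales with node size
        ReplicationFactor=3,          # 1 to 10 nodes
        IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
        SubnetGroupName="my-dax-subnets",  # nodes spread across AZs
    )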

I am not quite sure what the advantage is (compared to running memcached or some similar caching setup yourself); maybe it will click after a few days... :o

InnoDB's Buffer Pool Preload Feature

Percona discussed InnoDB's buffer pool preload feature: 「Using the InnoDB Buffer Pool Pre-Load Feature in MySQL 5.7」.

As the author notes, advances in hardware (mainly the rise of SSDs) mean the need for preloading is not as important as it used to be:

Frankly, time has reduced the need for this feature. Five years ago, we would typically store databases on spinning disks. These disks often took quite a long time to warm up with normal database workloads, which could lead to many hours of poor performance after a restart. With the rise of SSDs, warm up happens faster and reduces the penalty from not having data in the buffer pool.

Because random reads on SSDs are fast, you can just push the server straight into production and let it warm up while serving traffic. For InnoDB databases on spinning disks, however, the feature is still worth planning for, since random reads remain the pain point there...
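For the spinning-disk case, the feature is driven by a handful of server variables. A minimal sketch of toggling them at runtime with the stock MySQL 5.7 variable names, assuming pymysql and placeholder credentials:

    # Drive the buffer pool dump/load feature by hand.
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
    with conn.cursor() as cur:
        # Dynamic in 5.7: dump the hottest pages at shutdown, and cap
        # how much of the pool gets dumped.
        cur.execute("SET GLOBAL innodb_buffer_pool_dump_at_shutdown = ON")
        cur.execute("SET GLOBAL innodb_buffer_pool_dump_pct = 25")
        # innodb_buffer_pool_load_at_startup is startup-only and belongs
        # in my.cnf; at runtime you can trigger an immediate dump/load:
        cur.execute("SET GLOBAL innodb_buffer_pool_dump_now = ON")
        cur.execute("SET GLOBAL innodb_buffer_pool_load_now = ON")
    conn.close()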

Performance Impact of InnoDB Isolation Levels and the Performance Schema

Although Mark Callaghan's main focus these days is MyRocks, he still pays close attention to InnoDB performance (it is, after all, a mature and competitive product). His post 「Sysbench, InnoDB, transaction isolation and the performance schema」 looks at InnoDB in MySQL 5.6.26 to see how the isolation level and the performance schema affect performance; the results can be found there.

That turning off the performance schema improves throughput is expected, though the gain looks much smaller than I anticipated. The more surprising result is that in some cases RR (REPEATABLE-READ) outperforms RC (READ-COMMITTED); the post explains why:

Using repeatable-read boosts performance because it reduces the mutex contention from getting a consistent read snapshot as that is done once per transaction rather than once per statement.
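The snapshot-per-transaction vs snapshot-per-statement difference is easy to observe directly with two connections. A minimal sketch, assuming pymysql (whose connections default to autocommit off) and a placeholder table t(id, v):

    # Show that RR takes one read snapshot per transaction while RC
    # takes one per statement. Table/credentials are placeholders.
    import pymysql

    def connect(level):
        conn = pymysql.connect(host="127.0.0.1", user="app",
                               password="secret", database="test")
        with conn.cursor() as cur:
            cur.execute(f"SET SESSION TRANSACTION ISOLATION LEVEL {level}")
        return conn

    reader = connect("REPEATABLE READ")  # try "READ COMMITTED" for contrast
    writer = connect("READ COMMITTED")

    with reader.cursor() as r, writer.cursor() as w:
        r.execute("SELECT v FROM t WHERE id = 1")  # snapshot taken here (RR)
        first = r.fetchone()

        w.execute("UPDATE t SET v = v + 1 WHERE id = 1")
        writer.commit()

        r.execute("SELECT v FROM t WHERE id = 1")  # RR: same as `first`;
        second = r.fetchone()                      # RC: sees the new commit
        print(first, second)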

Looking at the actual numbers, though, RC is a bit faster in some of the read-only cases; overall, the difference between RR and RC on MySQL 5.6.26 is really not that significant anymore...

Amazon Aurora Supports a Reader Endpoint

Amazon Aurora now supports a Reader Endpoint, letting you spread read traffic across replicas: 「New Reader Endpoint for Amazon Aurora – Load Balancing & Higher Availability」.

Reads are the easier side to scale (commonly done through replication), and many database frameworks (including all sorts of ORM frameworks) already support read/write splitting, so this feature is a big help for scaling a system.
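With the reader endpoint, read/write splitting can be as simple as keeping two connection targets. A minimal sketch, assuming pymysql and placeholder endpoint names (the cluster endpoint always points at the writer; the -ro- endpoint balances across replicas):

    # Writes go to the cluster (writer) endpoint, reads to the reader
    # endpoint. Endpoint names and credentials are placeholders.
    import pymysql

    WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
    READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

    def connect(host):
        return pymysql.connect(host=host, user="app", password="secret",
                               database="app")

    def save_order(order_id, total):
        conn = connect(WRITER_ENDPOINT)
        with conn.cursor() as cur:
            cur.execute("INSERT INTO orders (id, total) VALUES (%s, %s)",
                        (order_id, total))
        conn.commit()
        conn.close()

    def list_orders():
        conn = connect(READER_ENDPOINT)  # may lag slightly behind the writer
        with conn.cursor() as cur:
            cur.execute("SELECT id, total FROM orders ORDER BY id")
            rows = cur.fetchall()
        conn.close()
        return rows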

I do wonder whether replication lag will be an issue, though; my guess is it will be...

Amazon Aurora Supports Cross-Region Read Replicas

The traditional approach (when you run your own datacenters) is to set up a site-to-site VPN (usually IPsec) between the two facilities and then hook up MySQL replication over it. Amazon Aurora now supports this directly in the console, so you can bring it up with a few clicks: 「New – Cross-Region Read Replicas for Amazon Aurora」.

The announcement does not seem to cover how the cross-region VPC connectivity is set up, though. Is the traffic carried over their own network? If so, how is the transfer billed?
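For reference, the few console clicks map onto two API calls: create a cluster in the target region whose ReplicationSourceIdentifier points at the source cluster's ARN, then add an instance to it. A minimal sketch with boto3; every identifier is a placeholder:

    # Create a cross-region Aurora read replica: a new cluster in the
    # target region replicating from the source cluster's ARN.
    import boto3

    rds = boto3.client("rds", region_name="us-west-2")  # target region

    rds.create_db_cluster(
        DBClusterIdentifier="mycluster-replica",
        Engine="aurora",
        ReplicationSourceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:cluster:mycluster"),
    )
    rds.create_db_instance(
        DBInstanceIdentifier="mycluster-replica-1",
        DBInstanceClass="db.r3.large",
        Engine="aurora",
        DBClusterIdentifier="mycluster-replica",
    )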