Posts tagged "ssd"

AWS Raises the Performance Limits of Amazon EBS

AWS has announced higher performance limits for Amazon EBS (specifically for Provisioned IOPS SSD, a.k.a. io1): "Amazon EBS Improves Performance for io1 Volumes".

Per-volume IOPS goes from 20K to 32K, and throughput from 320MB/sec to 500MB/sec:

Today we are announcing an improvement in performance of Provisioned IOPS SSD (io1) Volumes from 20,000 IOPS to 32,000 IOPS and from 320 MB/s to 500 MB/s of throughput per volume.

Presumably driven by advances in the underlying technology XD
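
Not from the announcement itself, but as a rough sketch of what requesting the new ceiling could look like with boto3 (the region, availability zone, and volume size below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Provisioned IOPS SSD (io1) volume at the new 32,000 IOPS ceiling.
# io1 enforces a minimum size-to-IOPS ratio, hence the large volume here.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder
    VolumeType="io1",
    Size=1000,    # GiB, placeholder
    Iops=32000,   # previously capped at 20,000
)
print(volume["VolumeId"])
```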

Amazon RDS Now Supports Larger Storage and More IOPS

An Amazon RDS upgrade: "Amazon RDS Now Supports Database Storage Size up to 16TB and Faster Scaling for MySQL, MariaDB, Oracle, and PostgreSQL Engines".

The storage ceiling goes from 6TB to 16TB, and existing instances can be scaled up painlessly. The IOPS ceiling also goes from 30K to 40K:

Starting today, you can create Amazon RDS database instances for MySQL, MariaDB, Oracle, and PostgreSQL database engines with up to 16TB of storage. Existing database instances can also be scaled up to 16TB storage without any downtime.

The new storage limit is an increase from 6TB and is supported for Provisioned IOPS and General Purpose SSD storage types. You can also provision up to 40,000 IOPS for Provisioned IOPS storage volumes, an increase from 30,000 IOPS.
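
As a hedged sketch (mine, not from the announcement) of scaling an existing instance up to the new limits with boto3; the instance identifier and region are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="ap-northeast-1")

# Scale an existing Provisioned IOPS instance to the new ceilings.
# "mydb" is a placeholder; the storage scaling happens without downtime.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    AllocatedStorage=16384,  # GiB, the new 16TB limit
    Iops=40000,              # the new Provisioned IOPS limit
    ApplyImmediately=True,
)
```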

That said, Amazon Aurora next door is still much bigger (64TB), and in practice you don't have to decide how much to provision at all; it grows on its own:

Q: What are the minimum and maximum storage limits of an Amazon Aurora database?

The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance.

Linode Is Rolling Out "Linode Block Storage"...

From "Linode Block Storage (Fremont beta)" we can see that Linode is launching Block Storage. It's SSD-based, the same as Amazon EBS's gp2.

The pricing model and the price are the same as well, with no I/O fees:

They're affordable - $0.10 per GB (free during the beta) and no usage fees.

For now, volumes range from 1GB to 1TB:

How big of a Volume can I create?
Between 1 GB and 1024 GB for now. After the beta, the max volume size may be larger.

Each Linode can attach up to eight volumes:

How many Volumes can I attach to a Linode at the same time?
Up to 8.

Billing starts in 2018:

The beta is free through 2017. January 1, 2018 the meter starts running.

With Block Storage available, some setups become much easier to build, since you're no longer constrained by the size of the local disk.

SSDs in Linux RAID1 Suffer from Unbalanced Reads

In "Unbalanced reads from SSDs in software RAID mirrors in Linux", the author looked at S.M.A.R.T. data and noticed that a RAID1 array built from two SSDs showed a very obvious read imbalance:

242 Total_LBAs_Read [...] 16838224623
242 Total_LBAs_Read [...] 1698394290

The reason is that Linux uses a different read-balancing algorithm for RAID1 when SSDs are involved:

The current state of RAID1 read balancing is kind of complex, but the important thing here in all kernels since 2012 is that if you have SSDs and at least one disk is idle, the first idle disk will be chosen.

A late-2016 change made the algorithm even more aggressive, extending the behavior to non-SSD arrays as well:

In kernels with the late 2016 change, this widens to if at least one disk is idle, the first idle disk will be chosen, even if all mirrors are HDs.

Since SSDs are fast, the load ends up almost entirely on the first disk... This should be fine for SSDs (in theory reads don't wear them out), but it still feels a bit odd.
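
To reproduce the author's check, here is a small sketch of mine that compares attribute 242 across the two mirror members, assuming smartctl from smartmontools is installed and run as root; the device names are placeholders:

```python
import subprocess

def total_lbas_read(device: str) -> int:
    """Parse SMART attribute 242 (Total_LBAs_Read) from smartctl output."""
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Total_LBAs_Read" in line:
            return int(line.split()[-1])  # raw value is the last column
    raise ValueError(f"attribute 242 not found on {device}")

# Placeholder device names for the two RAID1 members.
a = total_lbas_read("/dev/sda")
b = total_lbas_read("/dev/sdb")
print(f"/dev/sda: {a}\n/dev/sdb: {b}\nratio: {a / b:.1f}x")
```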

The Impact of InnoDB Redo Log Size on Performance

"Benchmark(et)ing with InnoDB redo log size" discusses how the size of the InnoDB redo log (i.e. innodb_log_file_size × innodb_log_files_in_group) affects performance.

The key point comes right at the top: on newer MySQL, a larger redo log gives better performance (on average) in almost every case:

tl;dr - conclusions specific to my test

  1. A larger redo log improves throughput
  2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
  3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
  4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD

The improvement in average performance is significant, whether from increasing the redo log size or from upgrading to 5.7.

But the author also ran into strange performance problems. Although the average improved significantly, performance degrades quite badly as more data is inserted; the details are on the original page:

The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).

So for now it may be enough to simply increase the redo log size (write performance improves at the very least), without treating this as a reason to upgrade MySQL.
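
For sizing, one common rule of thumb (my addition, not from the benchmark post) is to make the redo log hold roughly an hour of writes, measured via the Innodb_os_log_written counter. A sketch, assuming a local mysql client that can run queries non-interactively:

```python
import subprocess
import time

def os_log_written() -> int:
    # Cumulative redo bytes written, from SHOW GLOBAL STATUS.
    out = subprocess.run(
        ["mysql", "-N", "-B", "-e",
         "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written'"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.split()[-1])

before = os_log_written()
time.sleep(60)                           # sample for one minute
per_hour = (os_log_written() - before) * 60

# Total redo = innodb_log_file_size * innodb_log_files_in_group;
# aim for roughly an hour of writes.
print(f"redo written per hour: {per_hour / 2**30:.2f} GiB")
```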

Amazon EC2 Launches the I3 Instance Family

Amazon EC2 has launched machines with NVMe SSDs, the I3 family: "Now Available – I3 Instances for Demanding, I/O Intensive Applications".

Looking at Tokyo-region pricing, r4.16xlarge and i3.16xlarge both have 64 vCPUs and 488GB of RAM. There are only two differences:

  • First, r4 is rated at only 195 ECU while i3 gets 200 ECU, so it's slightly faster.
  • Second, i3 adds 8 × 1900GB NVMe SSDs.

Yet the price difference is small ($5.12/hr vs. $5.856/hr), so if your workload can put the SSDs to good use, i3 is quite a bargain compared to r4.*...

The Impact of Swapping on InnoDB

Percona's CEO ran experiments on MySQL 5.7 to measure how swap affects InnoDB: "The Impact of Swapping on MySQL Performance".

The test machine has 32GB of RAM, with the OS (and swap) on an aging Intel 520 SSD and MySQL on an Intel 750 NVMe. The experiment varies innodb_buffer_pool_size and observes the results.

At 24GB (75% of RAM) it holds steady at 44K QPS with a 3.5ms 95th-percentile response time:

This gives us about 44K QPS. The 95% query response time (reported by sysbench) is about 3.5ms.

At 32GB, swap I/O starts to show up, and throughput drops to 20K QPS with 9ms (95%):

We can see that performance stabilizes after a bit at around 20K QPS, with some 380MB/sec disk IO and 125MB/sec swap IO. The 95% query response time has grown to around 9ms.

Pushing it to 48GB drops things even further, to 6K QPS and 35ms (95%):

Now we have around 6K QPS. Disk IO has dropped to 250MB/sec, and swap IO is up to 190MB/sec. The 95% query response time is around 35ms.

The author found the drop smaller than expected:

When I started, I expected severe performance drop even with very minor swapping. I surprised myself by getting swap activity to more than 100MB/sec, with performance “only” halved.

The test used an SSD; a traditional spinning disk would be far more sensitive to the random access and would drop much more:

This assumes your swap space is on an SSD, of course! SSDs handle random IO (which is what paging activity usually is) much better than HDDs.

The takeaway is still to avoid hitting swap. Also, the comments happen to touch on a best practice I was speculating about a while back: vm.swappiness was set to 1 during the test, which appears to be the author's best practice:

Swappiness was set to 1 in this case. I was not expecting this to cause significant impact as swapping is caused by genuine (intended) missconfiguration with more memory required than available.
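
As a quick sanity-check sketch (mine; the 75% figure comes from the stable configuration in the test, and is a rule of thumb rather than a hard limit), reading the current swappiness and RAM on Linux:

```python
# Check vm.swappiness and estimate a buffer pool ceiling (Linux only).
def read_meminfo_kib(key: str) -> int:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # value is in KiB
    raise KeyError(key)

with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read())

total_gib = read_meminfo_kib("MemTotal") / 2**20
print(f"vm.swappiness = {swappiness} (the article's test used 1)")
print(f"RAM = {total_gib:.1f} GiB; "
      f"75% rule of thumb: buffer pool <= {total_gib * 0.75:.1f} GiB")
```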

InnoDB's Buffer Pool Preload Feature

Percona discusses InnoDB's buffer pool preload feature: "Using the InnoDB Buffer Pool Pre-Load Feature in MySQL 5.7".

As the post says, advances in hardware (mainly the rise of SSDs) have made preloading much less important than it used to be:

Frankly, time has reduced the need for this feature. Five years ago, we would typically store databases on spinning disks. These disks often took quite a long time to warm up with normal database workloads, which could lead to many hours of poor performance after a restart. With the rise of SSDs, warm up happens faster and reduces the penalty from not having data in the buffer pool.

Since SSD random reads are fast, you can simply push the server into production and let it warm up while serving traffic. For InnoDB databases on spinning disks, though, this is still worth planning for, since random reads remain the pain point...
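
For reference, a minimal sketch (mine) of driving the dump side of this feature manually in 5.7, assuming a local mysql client with sufficient privileges:

```python
import subprocess

def mysql(sql: str) -> str:
    # Run a statement through the local mysql client.
    return subprocess.run(
        ["mysql", "-N", "-B", "-e", sql],
        capture_output=True, text=True, check=True,
    ).stdout

# Persist the buffer pool contents at shutdown (dynamic), and force an
# asynchronous dump right now; the dump status can be polled.
mysql("SET GLOBAL innodb_buffer_pool_dump_at_shutdown = ON")
mysql("SET GLOBAL innodb_buffer_pool_dump_now = ON")
print(mysql("SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status'"))

# To reload on demand: SET GLOBAL innodb_buffer_pool_load_now = ON.
# Note that innodb_buffer_pool_load_at_startup is not dynamic and must
# be set in my.cnf before the server starts.
```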

AWS's General Purpose SSD (gp2) Now Exposes Burst I/O Credit Numbers

AWS has announced a metric that quantifies gp2's burst I/O credits: "New – Burst Balance Metric for EC2's General Purpose SSD (gp2) Volumes".

Even the smallest gp2 volume can burst to 3,000 IOPS, and each volume can accumulate up to 5.4M credits:

Each volume can accumulate up to 5.4 million credits, and they can be spent at up to 3,000 per second per volume.

Doing the math: each GB accrues 10,800 credits per hour (the baseline is 3 IOPS/GB), so a 10GB system volume fills its bucket in about 50 hours, and a 100GB one in about 5 hours. For applications with truly heavy, sustained I/O you should still consider Provisioned IOPS SSD (io1).
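
The arithmetic above as a small sketch (mine), so other volume sizes can be plugged in:

```python
BUCKET = 5_400_000      # max credits per volume
BASELINE_PER_GB = 3     # gp2 baseline IOPS per GB
BURST_IOPS = 3000       # maximum burst spend rate

def hours_to_fill(size_gb: int) -> float:
    # Credits accrue at the baseline rate: 3 IOPS/GB * 3600 s = 10,800/GB/hr.
    return BUCKET / (size_gb * BASELINE_PER_GB * 3600)

def burst_minutes(size_gb: int) -> float:
    # Spending at 3,000 IOPS while still accruing at the baseline rate.
    drain = BURST_IOPS - size_gb * BASELINE_PER_GB
    return BUCKET / drain / 60

for size in (10, 100):
    print(f"{size}GB: bucket fills in {hours_to_fill(size):.0f}h, "
          f"sustains a full burst for {burst_minutes(size):.0f}min")
```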

The obvious benefit is that you can now set up an alarm that yells when a machine's burst I/O credits are running low, giving you time to figure out the next step.

The number also helps you decide whether to grow the volume to trade space for IOPS, or to move to the guaranteed Provisioned IOPS SSD (io1).
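
A sketch of such an alarm with boto3 (mine; the volume ID and SNS topic ARN are placeholders, and the 20% threshold is illustrative):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alert when the volume's burst bucket falls below 20%. BurstBalance is
# reported as a percentage of the 5.4M-credit bucket.
cloudwatch.put_metric_alarm(
    AlarmName="gp2-burst-balance-low",
    Namespace="AWS/EBS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=20.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```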

The Impact of Firefox's Heavy Writes on SSDs

"Firefox is eating your SSD - here is how to fix it" covers Firefox's write load on SSDs. First, the fix from the article:

After some digging, I found out that this behavior is controlled by a parameter that you can access through typing “about:config” in the address bar. This parameter is called: browser.sessionstore.interval

The default is once every 15 seconds; the author changed it to once every 30 minutes, and the writes dropped by about 5x (which presumably means down to roughly 1/6 of the original?):

It is set to 15 seconds by default. In my case, I reset it to a more sane (at least for me) 30 minutes. Since then, I’m only seeing about 2GB written to disk when my workstation is left idle, which still feels like a lot but is 5 times less than before.
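
If you would rather persist the change outside of about:config, a sketch (mine; the profile path is a placeholder, and the value is in milliseconds) that appends the override to user.js:

```python
from pathlib import Path

# Placeholder profile directory; find yours under about:profiles.
profile = Path.home() / ".mozilla/firefox/xxxxxxxx.default"

# 30 minutes in milliseconds (the default is 15000, i.e. 15 seconds).
pref = 'user_pref("browser.sessionstore.interval", 1800000);\n'

with open(profile / "user.js", "a") as f:
    f.write(pref)
```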

According to an update at the end of the article, Google Chrome behaves similarly, but no fix is given for it yet...
