
EBS can now dynamically resize Magnetic Volumes...

AWS announced that EBS Magnetic Volumes can now be resized dynamically: 「Amazon EBS Extends Elastic Volumes to Support EBS Magnetic (Standard) Volume Type」. Magnetic Volumes are classified as a previous-generation product, so it is a bit of a surprise that this feature was extended to a previous-generation product...

Compared with Throughput Optimized HDD (st1) and Cold HDD (sc1), the previous-generation Magnetic Volume is a bit more expensive, but it can be used as a boot volume, and the minimum size is only 1GB (versus 500GB for st1 and sc1). However, compared with General Purpose SSD (gp2), once there is any real amount of I/O the total cost quickly exceeds gp2, so gp2 still looks like the better choice for now...
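The announcement is about Elastic Volumes, which means a volume can be grown with a single API call while it stays attached. A minimal sketch with boto3 (the volume ID and target size are made-up placeholders, and the filesystem on top still has to be grown separately afterwards):

    # Grow an attached EBS volume in place via the Elastic Volumes API.
    # vol-0123456789abcdef0 and the 200 GiB target are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)
    print(resp["VolumeModification"]["ModificationState"])  # e.g. "modifying"

    # The filesystem (ext4, XFS, ...) still needs to be resized separately
    # once the modification reaches the "optimizing" or "completed" state.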

Amazon EC2 fills out its product line: M5 now has local NVMe SSDs too

Amazon EC2 introduced a version of the M5 instances with local NVMe SSDs: 「EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)」. They are launching first in North American and European regions:

M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Is this the successor to the M3 series? For comparison, the m3.medium launched four years ago had 1 vCPU, 3.75GB RAM, and an 8GB SSD (the m3.large, though, had 2 vCPU, 7.5GB RAM, and a 32GB SSD, so quite a bit more disk).

With this, the M5 line is even more complete...

Amazon EC2 launches the C5d instance: optimized for both CPU and disk

AWS launched the C5d instance, a family optimized for both CPU and disk: 「EC2 Instance Update – C5 Instances with Local NVMe Storage (C5d)」. The instances come with locally attached NVMe storage, and the price is not much higher.

With one local NVMe drive per instance, it feels like this fills the gap left by the C3 line.

Also, not every region that has C5 gets C5d; for now only five regions have it:

C5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent C5 instances.

AWS raises the performance ceiling of Amazon EBS

AWS announced that Amazon EBS can now deliver higher performance (specifically for Provisioned IOPS SSD, a.k.a. io1): 「Amazon EBS Improves Performance for io1 Volumes」.

Per-volume IOPS goes from 20K to 32K, and throughput from 320MB/sec to 500MB/sec:

Today we are announcing an improvement in performance of Provisioned IOPS SSD (io1) Volumes from 20,000 IOPS to 32,000 IOPS and from 320 MB/s to 500 MB/s of throughput per volume.

Presumably this is just the march of technology XD
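To actually get the new ceiling you still have to provision the volume with that many IOPS. A minimal boto3 sketch (the availability zone and size are placeholders, not from the announcement; io1 allows roughly 50 IOPS per GiB, so 32,000 IOPS needs at least 640 GiB):

    # Create an io1 volume provisioned at the new 32,000 IOPS ceiling.
    # The availability zone and size below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        VolumeType="io1",
        Size=640,        # GiB; ~50 IOPS per GiB means 640 GiB covers 32K IOPS
        Iops=32000,
    )
    print(vol["VolumeId"], vol["Iops"])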

Amazon RDS supports larger storage and more IOPS

An upgrade for Amazon RDS: 「Amazon RDS Now Supports Database Storage Size up to 16TB and Faster Scaling for MySQL, MariaDB, Oracle, and PostgreSQL Engines」.

The storage cap goes from 6TB to 16TB, and existing instances can grow to it without downtime. The IOPS cap also goes from 30K to 40K:

Starting today, you can create Amazon RDS database instances for MySQL, MariaDB, Oracle, and PostgreSQL database engines with up to 16TB of storage. Existing database instances can also be scaled up to 16TB storage without any downtime.

The new storage limit is an increase from 6TB and is supported for Provisioned IOPS and General Purpose SSD storage types. You can also provision up to 40,000 IOPS for Provisioned IOPS storage volumes, an increase from 30,000 IOPS.
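A minimal boto3 sketch of scaling an existing instance up under the new limits (the instance identifier and target size are placeholders; this is not from the announcement):

    # Grow an existing RDS instance's storage toward the new 16TB cap.
    # The instance identifier and target size are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.modify_db_instance(
        DBInstanceIdentifier="my-mysql-instance",
        AllocatedStorage=16384,   # GiB (16TB), up from the old 6TB ceiling
        ApplyImmediately=True,    # storage scaling does not require downtime
    )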

That said, Amazon Aurora next door is still much bigger (64TB), and in practice you don't have to decide how much to provision at all; it grows on its own:

Q: What are the minimum and maximum storage limits of an Amazon Aurora database?

The minimum storage is 10GB. Based on your database usage, your Amazon Aurora storage will automatically grow, up to 64 TB, in 10GB increments with no impact to database performance. There is no need to provision storage in advance.

Linode is about to launch "Linode Block Storage"...

From 「Linode Block Storage (Fremont beta)」 you can see that Linode is rolling out Block Storage. It is SSD-based, just like Amazon EBS's gp2.

The pricing model and price are the same as well, with no I/O fee:

They're affordable - $0.10 per GB (free during the beta) and no usage fees.

Right now you can create volumes from 1GB up to 1TB:

How big of a Volume can I create?
Between 1 GB and 1024 GB for now. After the beta, the max volume size may be larger.

A single Linode can attach up to eight of them:

How many Volumes can I attach to a Linode at the same time?
Up to 8.

And billing starts in 2018:

The beta is free through 2017. January 1, 2018 the meter starts running.

With Block Storage, some setups become much easier to put together, and you are no longer limited by the size of the local disk.

SSDs in RAID1 under Linux have an unbalanced-read problem

In 「Unbalanced reads from SSDs in software RAID mirrors in Linux」, the author noticed from S.M.A.R.T. data that a RAID1 array built from two SSDs had a very obvious read imbalance:

242 Total_LBAs_Read [...] 16838224623
242 Total_LBAs_Read [...] 1698394290

The reason is that Linux uses a different read-balancing algorithm for RAID1 when SSDs are involved:

The current state of RAID1 read balancing is kind of complex, but the important thing here in all kernels since 2012 is that if you have SSDs and at least one disk is idle, the first idle disk will be chosen.

In late 2016 the algorithm became even more aggressive, applying even to non-SSD mirrors:

In kernels with the late 2016 change, this widens to if at least one disk is idle, the first idle disk will be chosen, even if all mirrors are HDs.

Since SSDs are fast, this puts almost all of the load on the first disk... That should be fine for SSDs (in theory reads don't wear them out), but it still feels a bit odd.
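If you want to spot the imbalance without digging into S.M.A.R.T., the kernel's per-device counters are enough. A minimal sketch (sda/sdb are placeholders for the actual mirror members):

    # Compare cumulative sectors read on the two members of a RAID1 mirror,
    # using /sys/block/<dev>/stat (third field = sectors read).
    def sectors_read(dev: str) -> int:
        with open(f"/sys/block/{dev}/stat") as f:
            return int(f.read().split()[2])

    mirror = ["sda", "sdb"]  # placeholders: the two members of the md mirror
    reads = {dev: sectors_read(dev) for dev in mirror}
    total = sum(reads.values()) or 1

    for dev, count in reads.items():
        print(f"{dev}: {count} sectors read ({100 * count / total:.1f}% of the mirror's reads)")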

The impact of InnoDB redo log size on performance

「Benchmark(et)ing with InnoDB redo log size」 discusses how the size of the InnoDB redo log affects performance (i.e. innodb_log_file_size and innodb_log_files_in_group).

The key point is stated right at the start: on newer MySQL versions, a larger redo log gives better (average) performance in almost every case:

tl;dr - conclusions specific to my test

  1. A larger redo log improves throughput
  2. A larger redo log helps more with slower storage than with faster storage because page writeback is more of a bottleneck with slower storage and a larger redo log reduces writeback.
  3. A larger redo log can help more when the working set is cached because there are no stalls from storage reads and storage writes are more likely to be a bottleneck.
  4. InnoDB in MySQL 5.7.17 is much faster than 5.6.35 in all cases except IO-bound + fast SSD

The improvement in average performance is significant, whether it comes from increasing the redo log size or from upgrading to 5.7.

But the author also ran into a strange performance issue: although the average improvement is significant, as more data is inserted the degradation becomes quite severe; the details are on the original page.

The results above show average throughput and that hides a lot of interesting behavior. We expect throughput over time to not suffer from variance -- for both InnoDB and for MyRocks. For many of the results below there is a lot of variance (jitter).

So for now it's probably enough to just increase the redo log size (write performance improves at least), rather than treating this as a reason to upgrade MySQL.
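Before changing anything, it's worth checking the current total redo log capacity, which is the product of the two variables. A minimal sketch (assumes PyMySQL is installed and the credentials are placeholders; not from the article):

    # Compute the total InnoDB redo log capacity:
    # innodb_log_file_size * innodb_log_files_in_group.
    import pymysql  # assumed available; connection details are placeholders

    conn = pymysql.connect(host="127.0.0.1", user="root", password="")
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%'")
        settings = dict(cur.fetchall())

    file_size = int(settings["innodb_log_file_size"])
    files = int(settings["innodb_log_files_in_group"])
    print(f"total redo log capacity: {file_size * files / 2**30:.2f} GiB")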

Amazon EC2 launches the I3 instance family

Amazon EC2 launched instances with NVMe SSDs, the I3 family: 「Now Available – I3 Instances for Demanding, I/O Intensive Applications」.

Looking at Tokyo-region pricing, the r4.16xlarge and i3.16xlarge both have 64 vCPU and 488GB RAM. There are only two differences:

  • First, the r4 is rated at 195 ECU while the i3 is rated at 200 ECU, so it is slightly faster.
  • Second, the i3 adds 8 x 1900 GB of NVMe SSD.

Yet the price difference is small ($5.12/hr versus $5.856/hr), so if your workload can make good use of the SSDs, it's actually a pretty good deal compared with r4.*...

The impact of swap on InnoDB

The head of Percona ran experiments on MySQL 5.7 to measure how swapping affects InnoDB: 「The Impact of Swapping on MySQL Performance」.

The test machine has 32GB of RAM, with the OS (and swap) on an aging Intel 520 SSD and MySQL on an Intel 750 NVMe. The experiment varies innodb_buffer_pool_size and observes the result.

With it set to 24GB (75% of RAM), throughput is a stable 44K QPS at 3.5ms (95th percentile):

This gives us about 44K QPS. The 95% query response time (reported by sysbench) is about 3.5ms.

When set to 32GB, swap I/O starts showing up and throughput drops to 20K QPS at 9ms (95th percentile):

We can see that performance stabilizes after a bit at around 20K QPS, with some 380MB/sec disk IO and 125MB/sec swap IO. The 95% query response time has grown to around 9ms.

Pushed to 48GB it drops even further, to 6K QPS at 35ms (95th percentile):

Now we have around 6K QPS. Disk IO has dropped to 250MB/sec, and swap IO is up to 190MB/sec. The 95% query response time is around 35ms.

The author found the drop was smaller than expected:

When I started, I expected severe performance drop even with very minor swapping. I surprised myself by getting swap activity to more than 100MB/sec, with performance “only” halved.

The test here used an SSD; with a traditional spinning disk, random access would be much more painful and the drop would be larger:

This assumes your swap space is on an SSD, of course! SSDs handle random IO (which is what paging activity usually is) much better than HDDs.

The bottom line is that you still want to avoid hitting swap. Also, the comments happen to mention the best practice I was guessing at a while ago: vm.swappiness was set to 1 for the test, which is presumably the author's best practice:

Swappiness was set to 1 in this case. I was not expecting this to cause significant impact as swapping is caused by genuine (intended) missconfiguration with more memory required than available.
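A minimal sketch (not from the article) of checking the two knobs discussed here on a Linux box: the current vm.swappiness, and what a 75%-of-RAM innodb_buffer_pool_size would be, the ratio that stayed stable in the test above:

    # Read vm.swappiness and total RAM, then print a 75%-of-RAM buffer pool
    # suggestion, mirroring the stable 24GB-on-32GB configuration in the test.
    import os

    with open("/proc/sys/vm/swappiness") as f:
        swappiness = int(f.read().strip())

    total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    total_gib = total_bytes / 2**30

    print(f"vm.swappiness = {swappiness}")
    print(f"total RAM     = {total_gib:.1f} GiB")
    print(f"75% of RAM (innodb_buffer_pool_size) = {total_gib * 0.75:.1f} GiB")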
