Backblaze's Hard Drive Purchasing Strategy

The post "How Backblaze Buys Hard Drives" describes Backblaze's hard drive purchasing strategy. It is driven almost entirely by cost, so the approach isn't much use to individuals, and most companies shouldn't copy it either, but it's still an interesting read...

For example, because they run so many drives, power consumption is a significant part of their cost evaluation, something that rarely comes up in more typical scenarios:

Power draw is a very important metric for us and the high speed enterprise drives are expensive in terms of power cost. We now total around 1.5 megawatts in power consumption in our centers, and I can tell you that every watt matters for reducing costs.

They also discuss the characteristics of SMR drives: although SMR offers more capacity per dollar, it forces architectural accommodations (a caching layer) and raises engineering costs, so they're not fans:

SMR would give us a 10-15% capacity-to-dollar boost, but it also requires host-level management of sequential data writing. Additionally, the new archive type of drives require a flash-based caching layer. Both of these requirements would mean significant increases in engineering resources to support and thereby even more investment. So all-in-all, SMR isn’t cost-effective in our system.

On the cost side, what they observe is a 5-10% drop per quarter:

Ideally, I can achieve a 5-10% cost reduction per terabyte per quarter, which is a number based on historical price trends and our performance for the past 10 years.
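As a quick sanity check (my own arithmetic, not from the post), compounding that quarterly rate gives the implied annual decline:

```python
# Compounding the quoted 5-10% per-quarter drop in cost per TB.
for q in (0.05, 0.10):
    annual = 1 - (1 - q) ** 4
    print(f"{q:.0%} per quarter -> about {annual:.1%} per year")
# 5% per quarter -> about 18.5% per year
# 10% per quarter -> about 34.4% per year
```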

They also mention using SAS controllers to drive multiple SATA disks (again a cost consideration), which is an interesting bit as well:

Longer term, one thing we’re looking toward is phasing out SATA controller/port multiplier combo. This might be more technical than some of our readers want to go, but: SAS controllers are a more commonly used method in dense storage servers. Using SATA drives with SAS controllers can provide as much as a 2x improvement in system throughput vs SATA, which is important to me, even though serial ATA (SATA) port multipliers are slightly less expensive. When we started our Storage Pod construction, using SATA controller/port multiplier combo was a great way to keep costs down. But since then, the cost for using SAS controllers and backplanes has come down significantly.

AWS Fargate Launches Spot

Amazon EC2 has long had Spot Instances (whose spot pricing mechanism can save a lot of money); at this year's AWS re:Invent, Fargate got a corresponding product line: "AWS Fargate Spot Now Generally Available".

As with EC2, the applications you run on it must tolerate being interrupted at any time (i.e. they must be crash-safe); worker-style programs are the common use case.

The price is roughly 30% of the regular rate (the current us-east-1 price as of this writing), and given that it launches much faster than EC2, this looks like an option worth considering...
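A minimal sketch of opting in with boto3, assuming an existing ECS cluster and task definition (the names and subnet ID here are hypothetical placeholders); Fargate Spot is requested through the capacity provider strategy:

```python
import boto3

ecs = boto3.client("ecs")

# Run a task on Fargate Spot via the capacity provider strategy.
# "my-cluster", "my-worker:1", and the subnet ID are placeholders.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-worker:1",
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE_SPOT", "weight": 1},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```

Tasks launched this way get a two-minute warning before being reclaimed, so the worker has to be able to pick up where it left off.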

Amazon Elasticsearch Service Can Now Use S3 as Second-Tier Storage

A new feature of Amazon Elasticsearch Service: using Amazon S3 as a second tier of storage (UltraWarm): "Announcing UltraWarm (Preview) for Amazon Elasticsearch Service".

UltraWarm requires different machines (running a different build?); their specs (vCPU-to-memory ratio) are close to the Memory Optimized instances, but they cost quite a bit more, so you need a large enough dataset before it pays off...

In us-east-1, SSD EBS storage costs USD$0.135/GB and traditional magnetic storage USD$0.067/GB (not sure whether I/O is charged on top?), while UltraWarm storage is USD$0.024/GB. Worth noting here: Amazon S3 is USD$0.023/GB, so the price appears to bundle in the API call costs?
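Plugging a hypothetical 10 TB of warm data into those per-GB-month figures makes the gap concrete (my own arithmetic):

```python
# Monthly cost for a hypothetical 10 TB of warm data, using the
# per-GB-month us-east-1 prices quoted above.
gb = 10 * 1024
tiers = {
    "EBS SSD": 0.135,
    "EBS magnetic": 0.067,
    "UltraWarm": 0.024,
    "S3 Standard": 0.023,
}
for name, price in tiers.items():
    print(f"{name:>12}: USD${gb * price:,.2f}/month")
```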

AWS's New Discount Model (Savings Plans)

A few days ago AWS announced a new discount model, covered in "New – Savings Plans for AWS Compute Services". It introduces two new discount plans:

  • Compute Savings Plans
  • EC2 Instance Savings Plans

First, Compute Savings Plans: not restricted to a region, and covering many types of services, not just EC2:

The plans automatically apply to any EC2 instance regardless of region, instance family, operating system, or tenancy, including those that are part of EMR, ECS, or EKS clusters, or launched by Fargate.

EC2 Instance Savings Plans, on the other hand, apply only to EC2 and require you to specify a region and instance family:

Just like with RIs, your savings plan covers usage of different sizes of the same instance type (such as a c5.4xlarge or c5.large) throughout a region.

As far as I can tell, EC2 Instance Savings Plans are essentially Regional RIs in new packaging, since Regional RIs could already be applied across the same instance family (an unused c5.xlarge RI can be applied to a c5.2xlarge, covering half of it pro rata, with the other half billed at the normal rate).
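A sketch of that pro-rata rule using EC2's documented size normalization factors (xlarge = 8 units, 2xlarge = 16); the helper function is my own illustration, not an AWS API:

```python
# EC2 size normalization factors, per the AWS RI documentation.
FACTORS = {"large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32}

def ri_coverage(ri_size: str, running_size: str) -> float:
    """Fraction of a running instance covered by one regional RI of
    the same family (illustrative helper, not an AWS API)."""
    return min(1.0, FACTORS[ri_size] / FACTORS[running_size])

print(ri_coverage("xlarge", "2xlarge"))  # 0.5: half at RI rate, half on-demand
```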

Compute Savings Plans are genuinely new: you commit to an hourly spend, and that hourly commitment can then be applied as a discount across many services.

Amazon CloudFront's Big Price Cut in South America...

Amazon CloudFront announced its 200th Point of Presence, along with a 56% price cut for South America starting in November: "200 Amazon CloudFront Points of Presence + Price Reduction".

The size of this cut is striking: South American bandwidth used to be the most expensive in all of CloudFront, and this single cut drops it below the prices across the entire Asia region; no idea what new contracts they managed to land:

We are also reducing the pricing for on-demand data transfer from CloudFront by 56% for all Points of Presence in South America, effective November 1, 2019.

That comparison excludes the AWS China region (a separate system), where the price is CNY$0.30866/GB, roughly USD$0.04/GB.

The three PoPs added this time are all new locations in South America as well:

Today I am happy to announce that our global network continues to grow, and now includes 200 Points of Presence, including new locations in Argentina (198), Chile (199), and Colombia (200):

AWS Launches AMD-Powered t3a.* Instances

AWS has launched t3a.* EC2 instances: "Now Available – AMD EPYC-Powered Amazon EC2 T3a Instances".

Availability is limited to a few regions for now, but it's a start:

You can launch T3a instances today in seven sizes in the US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form.

Pricing is about 10% lower:

Pricing is 10% lower than the equivalent existing T3 instances; see the On-Demand, Spot, and Reserved Instance pricing pages for more info.

For some of the tiny machines, it may be worth switching over the next time they get restarted anyway...
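Switching an existing machine over is just a stop / modify / start cycle; a minimal boto3 sketch (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3a.micro"},  # e.g. from t3.micro
)
ec2.start_instances(InstanceIds=[instance_id])
```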

The Many Fine Points of DynamoDB Autoscaling...

AdRoll's write-up of the pitfalls they hit with DynamoDB Autoscaling; some of these details you'd never notice without diving in yourself (very much a devil-in-the-details situation): "Managing DynamoDB Autoscaling with Lambda and Cloudwatch".

The first problem they raise is what autoscaling actually watches:

Ideally, the table should scale based on the number of requests that we are making, not the number of requests that are successful.

Another is that autoscaling won't scale down when a table sees no usage at all, which looks like some kind of safety mechanism. But it means that tables normally used only for reads have to handle scaling write capacity back down themselves after a batch job finishes:

Additionally, at the time of implementing this algorithm, the DynamoDB capacity could not be brought down automatically if the consumption was exactly zero, which can happen if you write to your table in batch instead of realtime, for example.

This meant that, when enabling autoscaling, tables that were read in realtime, but written to in batch, still needed manual intervention to bring the write capacity down after our jobs were done writing.
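One way to handle that write scale-down yourself after the batch job finishes, sketched with boto3 (table name and capacity numbers are hypothetical; this assumes provisioned mode):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# After the batch job, drop write capacity back to a floor. Note that
# update_table requires re-specifying the read capacity as well.
dynamodb.update_table(
    TableName="my-batch-written-table",  # hypothetical
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 1,
    },
)
```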

Another problem is that scale-downs are rate-limited:

Another interesting point that might bite users is that capacity decreases are an expensive operation for AWS, so they’re limited.

The number of decreases cited in the documentation can be achieved under very special conditions, since you need to have 4 decreases in the first hour of the day plus one for each of the remaining hours, for a total of 4 (first hour) + 23 (1 hourly) = 27.

The rest of the post is about working out an algorithm that could adjust capacity at a finer granularity, then rewriting it with Lambda... in the end the cost dropped to about 30% of what it had been:

Here is where we detected our costs for our batch tables dropping to around 30% of the initial cost.
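A rough sketch of the core idea, scaling on the requests being made (consumed plus throttled) instead of only the successful ones; the metric names are real CloudWatch DynamoDB metrics, but the target formula is my own illustration, not AdRoll's actual algorithm:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def metric_sum(name: str, table: str, minutes: int = 5) -> float:
    """Sum of a DynamoDB CloudWatch metric over the last few minutes."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName=name,
        Dimensions=[{"Name": "TableName", "Value": table}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=minutes * 60,
        Statistics=["Sum"],
    )
    points = resp["Datapoints"]
    return points[0]["Sum"] if points else 0.0

table = "my-table"  # hypothetical
window = 5 * 60     # seconds in the sampling window

consumed = metric_sum("ConsumedWriteCapacityUnits", table) / window
throttled = metric_sum("WriteThrottleEvents", table) / window
target = (consumed + throttled) * 1.2  # 20% headroom, illustrative
print(f"target write capacity: {target:.0f}")
```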

AdRoll operates at a fairly large scale, so a saving of around 70% justifies putting real effort into this...

AWS Announces Big Fargate Price Cuts

AWS Fargate gets a major price cut: "Announcing AWS Fargate Price Reduction By Up To 50%" and "AWS Fargate Price Reduction – Up to 50%".

Fargate's previous pricing was extremely high; there was basically no scenario where a normal application made sense on it... If you could stomach Fargate's price, you might as well keep three to five times the machines running on ECS or EKS as standby, and containers would still scale faster than Fargate.

This cut is substantial: 20% off vCPU isn't small in itself, but next to the 65% cut on memory it almost feels modest (...):

Effective Jan 07, 2019, we are reducing the price for AWS Fargate by 20% for vCPU and 65% for memory across all regions where Fargate is currently available.

Taking us-east-1's m5.24xlarge as a reference, it costs USD$4.608/hr. Mapping its 96 vCPU + 384 GiB RAM onto Fargate works out to USD$5.59296/hr, about 21% more.
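Reconstructing that number from the post-cut us-east-1 Fargate rates at the time (USD$0.04048 per vCPU-hour and USD$0.004445 per GB-hour):

```python
# Post-cut us-east-1 Fargate rates at the time of the announcement.
vcpu_hr, gb_hr = 0.04048, 0.004445

fargate = 96 * vcpu_hr + 384 * gb_hr   # m5.24xlarge-equivalent resources
m5_24xlarge = 4.608                    # on-demand USD/hr in us-east-1

print(f"Fargate: USD${fargate:.5f}/hr")             # 5.59296
print(f"premium: {fargate / m5_24xlarge - 1:.0%}")  # ~21%
```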

With pricing at this level, the product finally looks like it has a place...

EC2 Begins Rolling Out Machines with 100Gbps Networking

AWS is gradually rolling out EC2 instances with 100Gbps capability: "New – EC2 P3dn GPU Instances with 100 Gbps Networking & Local NVMe Storage for Faster Machine Learning + P3 Price Reduction".

From "Amazon EC2 Instance Types" you can see that previously only the c5n.18xlarge supported 100Gbps networking; the newly launched p3dn.24xlarge is the second to support it...

There's also news of a price cut for the P3 series; oddly, it takes effect on 2018/12/06 rather than at the start of the month. The regions and conditions are also somewhat complicated; frequent users may want to read through the details...