AWS Launches Anomaly Detection

AWS has added a new feature to CloudWatch that uses machine learning to provide anomaly detection on the values CloudWatch records: 「New – Amazon CloudWatch Anomaly Detection」. The corresponding documentation is at 「Using CloudWatch Anomaly Detection」.

The feature computes an expected band for a metric and raises an alarm when the value falls outside that band:

This feels like it automates a lot of work that used to be done by hand, which saves quite a bit of effort...
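As a rough illustration of what wiring this up looks like, here is a minimal boto3 sketch; the alarm name, metric, dimensions, and the band width of 2 standard deviations are all placeholders, not taken from the announcement:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the metric rises above the upper edge of the ML-generated band.
# "m1" is the raw metric, "ad1" is the ANOMALY_DETECTION_BAND computed over it.
cloudwatch.put_metric_alarm(
    AlarmName="cpu-anomaly",                       # hypothetical name
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=3,
    ThresholdMetricId="ad1",
    TreatMissingData="missing",
    Metrics=[
        {
            "Id": "m1",
            "ReturnData": True,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [
                        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # hypothetical
                    ],
                },
                "Period": 300,
                "Stat": "Average",
            },
        },
        {
            "Id": "ad1",
            "ReturnData": True,
            # 2 = number of standard deviations used for the band width
            "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",
        },
    ],
)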

Amazon Moves Another Major Division's Oracle Systems to Its Own AWS Services

This is essentially an AWS PR piece; with the boss's public commitment to the cloud and the political correctness that goes with it, these systems were always going to migrate over sooner or later...

This time it is Amazon's Consumer Business moving from Oracle to AWS's own services: 「Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database」.

There were originally 75 PB of data across nearly 7,500 databases:

We migrated 75 petabytes of internal data stored in nearly 7,500 Oracle databases to multiple AWS database services including Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS), and Amazon Redshift.

One of the claimed benefits is cost savings, but more than a hundred teams were involved in the migration. Without internal financial data there is no way to tell how long it will take for the savings to cover that effort, and the opportunity cost of scarce engineering resources is another figure we never get to see:

Cost Reduction – We reduced our database costs by over 60% on top of the heavily discounted rate we negotiated based on our scale. Customers regularly report cost savings of 90% by switching from Oracle to AWS.

More than 100 teams in Amazon’s Consumer business participated in the migration effort.

The latency reduction is also only a reference point, since systems tend to get rewritten as part of a migration; without internal data it is impossible to tell how much of the improvement comes from the AWS services themselves:

Performance Improvements – Latency of our consumer-facing applications was reduced by 40%.

The administrative overhead figure is about the only one you can take at face value, since they did move to managed services that scale:

Administrative Overhead – The switch to managed services reduced database admin overhead by 70%.

The things left unsaid are more interesting, for example that they went with Redshift instead of Athena. It looks like the priority was to get everything moved first and revisit the rest later...

EC2's ARM Instances Now Come in a Bare Metal Flavor Too...

AWS is pushing its ARM platform faster than expected; yesterday it announced a bare metal variant, a1.metal: 「Now Available: Bare Metal Arm-Based EC2 Instances」.

Compared with the bare metal options in the x86 lineup, the ARM machine is not particularly big, with only 16 cores and 32 GB of memory, so the main use cases are probably workloads that need access to low-level system information, or software whose licensing does not cover virtualized environments and therefore requires bare metal:

  • need access to physical resources and low-level hardware features, such as performance counters, that are not always available or fully supported in virtualized environments,
  • are intended to run directly on the hardware, or licensed and supported for use in non-virtualized environments.

a1.metal is available in eight regions at launch:

You can start using a1.metal instances today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Sydney).

EC2 Launches 18 TB and 24 TB Machines...

AWS has cranked out even bigger machines: 「EC2 High Memory Update – New 18 TB and 24 TB Instances」.

As before, they are only available with a three-year RI commitment. The pricing page still seems to be catching up, though: 「Amazon EC2 Dedicated Hosts Pricing」 only lists the previously announced 12 TB pricing, with nothing yet for the 18 TB and 24 TB sizes...

I used to tell coworkers that anything smaller than this machine's memory doesn't count as big data (it was 12 TB back then); the bar has now been raised to 24 TB...

Google Cloud Platform's Taiwan Region Can Now Launch Machines on the Standard Network Tier

Google Cloud Platform originally offered only the Premium network tier, where traffic rides Google's own backbone to the point closest to the user and is then handed off through a local facility. This keeps bandwidth stable, but it naturally costs more...

The Standard network tier added later hands traffic off directly once it leaves the data center, which is cheaper (see 「Network Service Tiers - Custom Cloud Network」), but the Taiwan region never offered the Standard tier (it seemed to require a separate request?), so at the end of every month I would test whether it had been enabled... I just found that it can now be turned on, though I am not sure whether it is fully rolled out or being enabled in batches.

After some quick tests the network looks pretty... bad? Maybe it is still being tuned...

For example, latency to 1.1.1.1 is quite high (Google's own 8.8.8.8 obviously has no such problem):

PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=49 time=48.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=49 time=48.5 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=49 time=48.5 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 48.554/48.691/48.964/0.193 ms

168.95.1.1 and 139.175.1.1 are also both bad (61.31.1.1 does not answer ping):

PING 168.95.1.1 (168.95.1.1) 56(84) bytes of data.
64 bytes from 168.95.1.1: icmp_seq=1 ttl=239 time=21.2 ms
64 bytes from 168.95.1.1: icmp_seq=2 ttl=239 time=21.3 ms
64 bytes from 168.95.1.1: icmp_seq=3 ttl=239 time=21.4 ms

--- 168.95.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 21.276/21.367/21.454/0.072 ms
PING 139.175.1.1 (139.175.1.1) 56(84) bytes of data.
64 bytes from 139.175.1.1: icmp_seq=1 ttl=53 time=63.4 ms
64 bytes from 139.175.1.1: icmp_seq=2 ttl=53 time=62.9 ms
64 bytes from 139.175.1.1: icmp_seq=3 ttl=53 time=62.9 ms

--- 139.175.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 62.967/63.139/63.455/0.303 ms

The academic network, on the other hand, is actually pretty good:

PING 140.112.2.2 (140.112.2.2) 56(84) bytes of data.
64 bytes from 140.112.2.2: icmp_seq=1 ttl=51 time=5.13 ms
64 bytes from 140.112.2.2: icmp_seq=2 ttl=51 time=4.40 ms
64 bytes from 140.112.2.2: icmp_seq=3 ttl=51 time=4.52 ms

--- 140.112.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.405/4.690/5.138/0.325 ms
PING 140.113.250.135 (140.113.250.135) 56(84) bytes of data.
64 bytes from 140.113.250.135: icmp_seq=1 ttl=55 time=5.87 ms
64 bytes from 140.113.250.135: icmp_seq=2 ttl=55 time=5.97 ms
64 bytes from 140.113.250.135: icmp_seq=3 ttl=55 time=6.11 ms

--- 140.113.250.135 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.872/5.987/6.119/0.135 ms
PING 140.117.11.1 (140.117.11.1) 56(84) bytes of data.
64 bytes from 140.117.11.1: icmp_seq=1 ttl=242 time=9.52 ms
64 bytes from 140.117.11.1: icmp_seq=2 ttl=242 time=9.17 ms
64 bytes from 140.117.11.1: icmp_seq=3 ttl=242 time=9.20 ms

--- 140.117.11.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.172/9.298/9.521/0.176 ms

Anyone who needs it can go test it out now...
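For anyone who wants to repeat this kind of spot check, here is a minimal sketch of a script that shells out to ping and reports the average RTT per target; it is an illustration rather than the exact commands used above, and the target list is just the IPs mentioned in this post:

import subprocess

TARGETS = ["1.1.1.1", "8.8.8.8", "168.95.1.1", "139.175.1.1", "140.112.2.2"]

for ip in TARGETS:
    # Send 3 probes per target, same as the tests above.
    result = subprocess.run(
        ["ping", "-c", "3", ip],
        capture_output=True, text=True,
    )
    # The summary line looks like: "rtt min/avg/max/mdev = 4.405/4.690/5.138/0.325 ms"
    for line in result.stdout.splitlines():
        if line.startswith("rtt"):
            avg = line.split("=")[1].split("/")[1]
            print(f"{ip}: avg {avg} ms")
            break
    else:
        print(f"{ip}: no reply")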

EC2 Is Switching From Instance-Count Limits to vCPU-Count Limits

This is one of AWS's safety mechanisms: the number of machines you can launch on Amazon EC2 has always been limited.

AWS plans to replace the old per-instance limits with new vCPU-based limits: 「Using new vCPU-based On-Demand Instance limits with Amazon EC2」, and you can opt in now: 「vCPU-based On-Demand Instance Limits are Now Available in Amazon EC2」.

The problem this fixes is that m5.large and m5.xlarge used to count against two separate limits, which made things awkward; everything is now managed in terms of vCPUs.

Under the new scheme, general-purpose machines share one vCPU limit, while each of the specialized families gets its own vCPU limit:

In addition to now measuring usage in number of vCPUs, there will only be five different On-Demand Instance limits—one limit that governs the usage of standard instance families such as A, C, D, H, I, M, R, T, and Z, and one limit per accelerated instance family for FPGA (F), graphic-intensive (G), general purpose GPU (P), and special memory optimized (X) instances.

You can opt in manually starting September 24; your current usage will be converted into the new limits, and on October 24 everyone will be switched over:

During a transition period from September 24, 2019, through October 24, 2019, you can opt in to receive vCPU-based instance limits. When you opt in, EC2 automatically computes your new limits, giving you access to launch at least the same number of instances (if not more) than you do currently. Beginning October 24, 2019, all accounts will switch to vCPU-based instance limits, and the current count-based instance limits will no longer be supported. Although the switchover will not impact your ability to launch EC2 instances, you should familiarize yourself with the new On-Demand Instance limits experience and opt into vCPU limits at a time of your choosing.

This should make things a bit more convenient...
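If you want to see where your account's new vCPU limits land after opting in, they show up in Service Quotas. A minimal boto3 sketch along these lines (the substring match on "Running On-Demand" is only an assumption about how the quota names are phrased; adjust as needed):

import boto3

client = boto3.client("service-quotas")

# Walk the EC2 quotas and keep only the On-Demand vCPU limits.
token = None
while True:
    kwargs = {"ServiceCode": "ec2"}
    if token:
        kwargs["NextToken"] = token
    resp = client.list_service_quotas(**kwargs)
    for quota in resp["Quotas"]:
        if "Running On-Demand" in quota["QuotaName"]:
            print(f'{quota["QuotaName"]}: {quota["Value"]:.0f} vCPUs')
    token = resp.get("NextToken")
    if not token:
        break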

Buying IOPS With Storage Space on AWS...

「A web performance issue」 describes a performance problem in one of Mozilla's systems, the troubleshooting that followed, and how it was resolved.

The system runs on AWS, and after a series of checks the cause turned out to be the EBS volume behind RDS hitting its IOPS limit:

After reading a lot of documentation about Amazon’s RDS set-up I determined that slow downs in the database were related to IOPS spikes. Amazon gives you 3 IOPS per Gb and with a storage of 1 Terabyte we had 3,000 IOPS as our baseline. The graph below shows that at times we would get above that max baseline.

The fix most people reach for is the same: Provisioned IOPS is too expensive, so you simply add storage to get more IOPS (General Purpose SSD gives you 3 IOPS per GB):

To increase the IOPS baseline we could either increase the storage size or switch from General SSD to Provisioned IOPS storage. The cost of the different storage type was much higher so we decided to double our storage, thus, doubling our IOPS baseline. You can see in the graph below that we’re constantly above our previous baseline. This change helped Treeherder’s performance a lot.

They then set up an alert so that next time capacity can be raised ahead of time:

In order to prevent getting into such a state in the future, I also created a CloudWatch alert. We would get alerted if the combined IOPS is greater than 5,700 IOPS for 6 datapoints within 10 minutes.
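The post does not show the alarm definition itself, but in CloudWatch metric-math terms it boils down to "ReadIOPS + WriteIOPS > 5,700 for 6 datapoints within 10 minutes". A hedged boto3 sketch of that idea (the DB instance identifier and alarm name below are made up, not Mozilla's actual configuration):

import boto3

cloudwatch = boto3.client("cloudwatch")

def iops_query(query_id, metric_name):
    # One side of the ReadIOPS + WriteIOPS sum for an RDS instance.
    return {
        "Id": query_id,
        "ReturnData": False,
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/RDS",
                "MetricName": metric_name,
                "Dimensions": [
                    {"Name": "DBInstanceIdentifier", "Value": "treeherder-prod"},  # hypothetical
                ],
            },
            "Period": 60,
            "Stat": "Average",
        },
    }

cloudwatch.put_metric_alarm(
    AlarmName="rds-combined-iops-high",   # hypothetical
    ComparisonOperator="GreaterThanThreshold",
    Threshold=5700,
    EvaluationPeriods=10,                 # 10 one-minute datapoints...
    DatapointsToAlarm=6,                  # ...alarm when 6 of them breach
    Metrics=[
        iops_query("read", "ReadIOPS"),
        iops_query("write", "WriteIOPS"),
        {"Id": "total", "Expression": "read + write", "ReturnData": True},
    ],
)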

Note that General Purpose SSD IOPS are not 100% guaranteed; the documentation only says:

AWS designs gp2 volumes to deliver 90% of the provisioned performance 99% of the time.

It should be enough for most cases...

I Wrote a Small Tool for Quickly Switching Services on AWS With '/'

Switching services in the AWS Management Console normally requires the mouse; this userscript adds a / shortcut that pulls up the service menu directly, and Esc closes it again: 「AWS Web Console Service Shortkeys」.

The script should work in multiple browsers, but you first need an extension that can run userscripts, such as Tampermonkey.

I kept noticing that switching services forced me to reach for the mouse; with this script there is now a way to do it without taking my hands off the keyboard, which feels a bit smoother...

Still, I hope this eventually becomes a built-in feature :o

Amazon EC2 Launches G4 Instances

Amazon EC2 has refreshed the G series. The real reason for writing this post is to review the difference between the P and G series (I can never remember which one is meant for scientific computing): 「Now Available – EC2 Instances (G4) with NVIDIA T4 Tensor Core GPUs」.

GPU instances on EC2 are developed along two lines, the P series and the G series, both built on Nvidia products.

The 「Accelerated Computing」 section of 「Amazon EC2 Instance Types」 shows which GPU models each line uses (leaving aside the FPGA-based F1):

  • P3:Up to 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
  • P2:High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12GiB of GPU memory
  • G4:NVIDIA T4 Tensor Core GPUs
  • G3:NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory

Looking it up, all four GPUs, despite being released at different times, are listed under 「Nvidia Tesla」, and there is not much explanation there either, so the difference is still unclear. I will pay off this knowledge debt whenever I actually run into them...