Two new Amazon EFS announcements: Elastic Throughput and lower latency

Among the Amazon EFS announcements at this year's re:Invent, I've seen two so far. The first is 「New – Announcing Amazon EFS Elastic Throughput」, which introduces Elastic Throughput.

The traditional Bursting Throughput mode allocates throughput based on how much storage you use: the baseline is 50 MiB/s per TiB, but it can burst up to 100 MiB/s per TiB:

When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.
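As a rough illustration of that formula, here is a minimal sketch in Python; the per-TiB rates and minimums come from the quote above, the storage sizes are made-up examples, and the regional cap is not modeled:

```python
def efs_bursting_throughput(storage_tib):
    """Approximate EFS Bursting Throughput limits per the quoted rules (MiB/s)."""
    baseline = max(storage_tib * 50, 1)    # 50 MiB/s per TiB, minimum 1 MiB/s
    burst = max(storage_tib * 100, 100)    # 100 MiB/s per TiB, minimum 100 MiB/s
    return baseline, burst

for tib in (0.1, 1, 10):
    baseline, burst = efs_bursting_throughput(tib)
    print(f"{tib} TiB -> baseline {baseline} MiB/s, burst {burst} MiB/s")
```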

Elastic Throughput, on the other hand, is a high-performance mode that can deliver 3 GiB/s of read throughput and 1 GiB/s of write throughput:

Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.

But this of course comes at a price: Elastic Throughput is billed by the amount of data transferred. With us-east-1 pricing, reads are $0.03/GB and writes are $0.06/GB.

After some rough math, it looks better suited for workloads that need a large amount of fast reads and writes within a short window. If you don't care about time (like a cron job), you don't need Elastic Throughput... and using it for home directories might be a decent choice?
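To make the rough math concrete, here's a minimal sketch of the per-GB billing against the quoted us-east-1 rates (the workload sizes are made-up examples):

```python
# us-east-1 Elastic Throughput rates quoted above
READ_PER_GB = 0.03   # USD
WRITE_PER_GB = 0.06  # USD

def elastic_throughput_cost(read_gb, write_gb):
    """Monthly transfer cost under Elastic Throughput for a given read/write volume."""
    return read_gb * READ_PER_GB + write_gb * WRITE_PER_GB

print(elastic_throughput_cost(500, 100))        # 21.0 USD: a bursty job, 500 GB read / 100 GB written
print(elastic_throughput_cost(10_000, 2_000))   # 420.0 USD: 10 TB read / 2 TB written
```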

The second item is free: a performance improvement to Amazon EFS that lowers latency: 「AWS announces lower latencies for Amazon Elastic File System」.

First is the read performance improvement; from the description it sounds like the gain comes from adding a cache layer:

Amazon EFS now delivers up to 60% lower read operation latencies when working with frequently-accessed data and metadata.

There's also an optimization for small-file writes:

In addition, EFS now delivers up to 40% lower write operation latencies when working with small files (<64 KB) and metadata.

However, these improvements only apply to new EFS file systems, and for now only in us-east-1:

These enhancements are available automatically for all new EFS file systems using General Purpose mode in the US East (N. Virginia) Region, and will become available in the remaining AWS commercial regions over the coming weeks.

AWS launches Lambda SnapStart to speed up Lambda startup

This year's AWS re:Invent has started again; quite a few new features will pop up over the week, so I'm picking the ones I find more interesting to write about.

AWS has launched Lambda SnapStart for Lambda, improving cold start speed: 「New – Accelerate Your Lambda Functions with Lambda SnapStart」.

They picked a fairly telling example, Spring Boot on Java (the sample is at 「Serverless Spring Boot 2 example」), where the cold start drops from over 6 seconds to under 200 ms:

SnapStart has reduced the cold start duration from over 6 seconds to less than 200 ms.

The approach is to take a snapshot of memory after the initialization code has finished and store it; later cold starts then begin with a restore instead of re-initializing:

With SnapStart, the initialization phase (represented by the Init duration that I showed you earlier) happens when I publish a new version of the function. When I invoke a function that has SnapStart enabled, Lambda restores the snapshot (represented by the Restore duration) before invoking the function handler. As a result, the total cold invoke with SnapStart is now Restore duration + Duration.
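Enabling it is just a configuration flag plus publishing a new version (the snapshot is taken at publish time, per the quote above). A minimal sketch with boto3 (the function name is a placeholder, and at launch SnapStart only applied to Java runtimes):

```python
import boto3

lambda_client = boto3.client("lambda")

# Enable SnapStart for published versions of the function (placeholder name)
lambda_client.update_function_configuration(
    FunctionName="my-spring-boot-function",
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# The snapshot is created when a new version is published
lambda_client.publish_version(FunctionName="my-spring-boot-function")
```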

Not every application can adopt it directly, though; there are a few things to watch out for. The easier ones to understand are connections (like pre-established connections to a backend database) and temporary files (like data precomputed and written to a temp file), which both need to be re-established.

The more unusual one is that random number generators need to be re-initialized, otherwise there's a chance of producing identical random data; this is something most developers would overlook:

When using SnapStart, any unique content that used to be generated during the initialization must now be generated after initialization in order to maintain uniqueness.

So AWS has patched OpenSSL for use under SnapStart, and they have also confirmed that Java's java.security.SecureRandom is fine as-is:

We have updated OpenSSL’s RAND_Bytes to ensure randomness when used in conjunction with SnapStart, and we have verified that java.security.SecureRandom is already snap-resilient.

AWS also recommends reading the system's /dev/random or /dev/urandom directly, which naturally won't be frozen by the snapshot, so there's no problem either:

Amazon Linux’s /dev/random and /dev/urandom are also snap-resilient.
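The uniqueness pitfall itself is easy to picture. A minimal, language-agnostic sketch of the anti-pattern (SnapStart launched for Java runtimes, so this Python snippet only illustrates the idea, not actual SnapStart code):

```python
import uuid

# BAD: generated once during initialization; under SnapStart this value would be
# captured in the snapshot and reused by every restored execution environment.
INSTANCE_TOKEN = uuid.uuid4().hex

def handler_bad(event, context):
    return {"token": INSTANCE_TOKEN}

# GOOD: generate unique content after initialization, inside the handler,
# so every invocation gets a fresh value even after a restore.
def handler_good(event, context):
    return {"token": uuid.uuid4().hex}
```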

The feature is said to come at no extra charge; looks pretty good for the Java crowd?

CloudFront supports JA3 information (SSL/TLS fingerprints)

I saw that CloudFront announced support for passing along JA3 information: 「Amazon CloudFront now supports JA3 fingerprint headers」:

Details: Amazon CloudFront now supports Cloudfront-viewer-ja3-fingerprint headers, enabling customers to access incoming viewer requests’ JA3 fingerprints. Customers can use the JA3 fingerprints to implement custom logic to block malicious clients or allow requests from expected clients only.

The JA3 page explains it: because the SSL/TLS handshake is such a complex process, different client implementations can be identified from it, which is more effective than the easily-forged User-agent:

JA3 is a method for creating SSL/TLS client fingerprints that should be easy to produce on any platform and can be easily shared for threat intelligence.

For example, Tor's SSL/TLS connections have a corresponding fingerprint that can be detected:

JA3 fingerprint for the standard Tor client:
e7d705a3286e19ea42f587b344ee6865
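On the origin side, the "custom logic" from the announcement can be as simple as checking the header against a blocklist. A minimal sketch (assuming the Cloudfront-viewer-ja3-fingerprint header is actually forwarded to your origin; the helper and blocklist are made up, and the Tor fingerprint is the one quoted above):

```python
# JA3 fingerprints to reject; the Tor value is the one quoted above.
BLOCKED_JA3 = {
    "e7d705a3286e19ea42f587b344ee6865",  # standard Tor client
}

def is_blocked(headers):
    """headers: dict of request headers received from CloudFront."""
    ja3 = headers.get("cloudfront-viewer-ja3-fingerprint", "").lower()
    return ja3 in BLOCKED_JA3

print(is_blocked({"cloudfront-viewer-ja3-fingerprint": "e7d705a3286e19ea42f587b344ee6865"}))  # True
print(is_blocked({"user-agent": "curl/7.86.0"}))  # False
```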

The curl-impersonate project mentioned earlier in 「修正 Curl 的 TLS handshake,避開 bot 偵測機制」 is a way to counter this kind of detection.

AWS opens a region in Spain

Just a few days ago I wrote about AWS opening a region in Switzerland in central Europe (see 「AWS adds a central Europe region (Zurich, Switzerland)」 below), and now I see they've opened one in Spain: 「Now Open–AWS Region in Spain」.

Note that the region code (eu-south-2) isn't under western Europe (eu-west) but southern Europe (eu-south), and the 「Regions and Zones」 page somehow hasn't been updated yet...

Also, the instance selection is a bit richer than Switzerland's, including ARM instance types like t4g and r6g.

Europe is suddenly getting a lot of data centers, while the Americas have no big news for now... (because everyone loves us-east-1?)

AWS adds a central Europe region (Zurich, Switzerland)

AWS has opened a new region in Zurich, Switzerland, with the code eu-central-2: 「A New AWS Region Opens in Switzerland」. eu-central-1 is Germany, so this time central Europe extends a bit further south.

I am pleased to announce today the opening of our 28th AWS Region: Europe (Zurich), also known by its API name: eu-central-2.

Europe keeps getting more regions, but you can see there's still no location in eastern Europe, and given the current war it probably won't happen any time soon...

Amazon EC2 root volumes can now be swapped out directly from an AMI

This feature has finally arrived after more than ten years of waiting: Amazon EC2 can at last replace an instance's root volume directly from an AMI, without stopping the machine first: 「Amazon EC2 enables easier patching of guest operating system and applications with Replace Root Volume」.

Starting today, Amazon EC2 supports the replacement of instance root volume using an updated AMI without requiring customers to stop their instance. This allows customers to easily update their applications and guest operating system, while retaining the instance store data, networking and IAM configuration.
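The API side is a single call. A minimal sketch with boto3 (instance and AMI IDs are placeholders; the ImageId parameter is what this announcement adds on top of the earlier snapshot-based replace, so double-check the exact names against the current SDK):

```python
import boto3

ec2 = boto3.client("ec2")

# Replace the running instance's root volume with the root volume of an
# updated AMI, without stopping the instance (IDs are placeholders).
task = ec2.create_replace_root_volume_task(
    InstanceId="i-0123456789abcdef0",
    ImageId="ami-0123456789abcdef0",
)
print(task["ReplaceRootVolumeTask"]["TaskState"])
```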

It's a problem from the pre-container era; later everyone turned the workaround into standard practice: whenever needed, rebuild the whole image (with tools like Packer), then update the AMI id with tooling and launch new machines, which sidesteps the need to stop the existing machine first...

Makes you wonder why they suddenly decided to come back and support this feature XD

Amazon EC2's Trn1 is now generally available

The trn1.* instances built on AWS's in-house chip are now live: 「Amazon EC2 Trn1 Instances for High-Performance Model Training are Now Available」.

Previously, among the three major cloud vendors only Google Cloud Platform had TPU for both training & evaluation; now AWS has launched AWS Trainium to fill in the training side of its lineup. Officially they claim up to 50% lower compute cost than comparable GPU-based setups:

Trainium-based EC2 Trn1 instances solve this challenge by delivering faster time-to-train while offering up to 50% cost-to-train savings over comparable GPU-based instances.

Both PyTorch and TensorFlow are supported:

The Neuron plugin natively integrates with popular ML frameworks, such as PyTorch and TensorFlow.
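On the PyTorch side, the Neuron SDK builds on torch-xla, so a training loop looks roughly like the usual XLA-device pattern. A minimal sketch (the model and data are placeholders, and this is only the generic torch-xla pattern, not a complete Neuron setup):

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # from the torch-xla stack the Neuron plugin builds on

device = xm.xla_device()                # the Trainium accelerator shows up as an XLA device
model = nn.Linear(128, 10).to(device)   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(32, 128).to(device)             # placeholder batch
    y = torch.randint(0, 10, (32,)).to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer)                     # steps the optimizer and marks the XLA step
```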

You can also use neuron-ls to see information about the Neuron devices, though I don't understand why they masked out the private IP information:

Large-scale clusters integrate with Amazon FSx for Lustre to provide storage:

For large-scale model training, Trn1 instances integrate with Amazon FSx for Lustre high-performance storage and are deployed in EC2 UltraClusters. EC2 UltraClusters are hyperscale clusters interconnected with a non-blocking petabit-scale network.

But the first wave of regions is a bit limited: only the perennial us-east-1 (US East) and us-west-2 (US West):

You can launch Trn1 instances today in the AWS US East (N. Virginia) and US West (Oregon) Regions as On-Demand, Reserved, and Spot Instances or as part of a Savings Plan.

In us-east-1, trn1.2xlarge costs US$1.34375/hr, but without actually running a comparison it's hard to tell whether it holds up...

But at least there's finally a product on the table to compete with; after all, you have to be big enough to get this kind of silicon custom-built.

AWS Tokyo region now has machines with 12TB of memory

At the start of the month AWS announced that u-12tb1.112xlarge is available in the Tokyo region: 「Amazon EC2 High Memory instances with 3, 6, 9, and 12TiB of memory are now available in Asia Pacific (Tokyo) region」. The on-demand price is $131.733/hr, which at 720 hours a month comes to $94,847.76/mo...
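For the record, the monthly math (720 hours is the approximation used above):

```python
hourly = 131.733               # u-12tb1.112xlarge on-demand in Tokyo, USD/hr
print(round(hourly * 720, 2))  # 94847.76 USD per month
```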

If I remember correctly, you have to request a limit increase before you can launch these machines, so you can't just spin one up to test. Also, the announcement mentions Savings Plans but not RIs (reserved instances); I'm not sure whether RIs just aren't available for purchase yet (though I recall Savings Plans have a similar discount structure):

Starting today, Amazon EC2 High Memory instances with 3TiB (u-3tb1.56xlarge), 6TiB (u-6tb1.56xlarge, u-6tb1.112xlarge), 9TiB (u-9tb1.112xlarge), and 12TiB of memory (u-12tb1.112xlarge) are available in Asia Pacific (Tokyo) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.

These are machines for solving problems with brute force...