CloudFront now supports HTTP/3

Although HTTP/3 (RFC 9114) is still a Proposed Standard rather than a full Internet Standard, CloudFront has announced support for it: "New – HTTP/3 Support for Amazon CloudFront".

Enabling it is just a matter of ticking the checkbox in the CloudFront console:
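If you would rather script it, the same switch is the HttpVersion field of the distribution config; a minimal sketch with the AWS CLI (the distribution ID is a placeholder, and the JSON edit is left as a manual step):

$ aws cloudfront get-distribution-config --id EDFDVBD6EXAMPLE > dist.json
$ # edit the DistributionConfig in dist.json: "HttpVersion": "http2and3",
$ # save it as distconfig.json, then push it back with the ETag from dist.json:
$ aws cloudfront update-distribution --id EDFDVBD6EXAMPLE \
      --if-match ETAG_FROM_DIST_JSON \
      --distribution-config file://distconfig.json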

Looking at the description in RFC 9114: HTTP/3, a client can try to establish a QUIC connection over UDP, but it needs a mechanism to fall back to HTTP/2 or HTTP/1.1 over TCP when that fails:

A client MAY attempt access to a resource with an "https" URI by resolving the host identifier to an IP address, establishing a QUIC connection to that address on the indicated port (including validation of the server certificate as described above), and sending an HTTP/3 request message targeting the URI to the server over that secured connection. Unless some other mechanism is used to select HTTP/3, the token "h3" is used in the Application-Layer Protocol Negotiation (ALPN; see [RFC7301]) extension during the TLS handshake.

Connectivity problems (e.g., blocking UDP) can result in a failure to establish a QUIC connection; clients SHOULD attempt to use TCP-based versions of HTTP in this case.

The other path is to tell the browser, over an existing TCP connection, that it can upgrade via an HTTP header:

An HTTP origin can advertise the availability of an equivalent HTTP/3 endpoint via the Alt-Svc HTTP response header field or the HTTP/2 ALTSVC frame ([ALTSVC]) using the "h3" ALPN token.

Like this:

Alt-Svc: h3=":50781"

The client can then move up to HTTP/3:

On receipt of an Alt-Svc record indicating HTTP/3 support, a client MAY attempt to establish a QUIC connection to the indicated host and port; if this connection is successful, the client can send HTTP requests using the mapping described in this document.
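Both mechanisms are easy to poke at from the command line with a curl built against a QUIC library (the hostname is a placeholder; --http3 is still flagged experimental in curl):

$ # does the server advertise HTTP/3 via Alt-Svc?
$ curl -sI https://dxxxxxxxxxxxx.cloudfront.net/ | grep -i alt-svc
$ # force HTTP/3 directly; prints "3" if the QUIC handshake succeeded
$ curl --http3 -s -o /dev/null -w '%{http_version}\n' https://dxxxxxxxxxxxx.cloudfront.net/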

The FAQ also mentions that there is no extra charge for enabling HTTP/3; requests are billed at the usual request rates:

Q. Is there a separate charge for enabling HTTP/3?

No, there is no separate charge for enabling HTTP/3 on Amazon CloudFront distributions. HTTP/3 requests will be charged at the request pricing rates as per your pricing plan.

I've switched it on to play around with...

The state of EC2-Classic

Saw this on Twitter:

2022/08/15 was the last day EC2-Classic was supposed to be alive, as previously mentioned in "AWS announces the plan to retire EC2-Classic".

The official announcement, "EC2-Classic Networking is Retiring – Here's How to Prepare", gives the final date:

On August 15, 2022 we expect all migrations to be complete, with no remaining EC2-Classic resources present in any AWS account.

Since the wording is "expect", though, there are probably still customers who haven't migrated, and AWS itself doesn't seem to have said anything about this milestone...

AWS Global Accelerator supports IPv6

AWS's anycast service, AWS Global Accelerator, has announced IPv6 support: "New for AWS Global Accelerator – Internet Protocol Version 6 (IPv6) Support".

This fills in a missing feature, but it only helps clients on IPv6-only networks with no DNS64 + NAT64 translation; the IPv6 service that ISPs currently sell to ordinary consumers should all still come with DNS64 + NAT64...

Also, using this feature requires the VPC to be IPv6-capable:

To test this new feature, I need a dual-stack application with an ALB entry point. The application must be deployed in Amazon Virtual Private Cloud (Amazon VPC) and support IPv6 traffic.

IPv4 traffic then goes to the IPv4 side of the service, and IPv6 traffic to the IPv6 side:

Protocol translation is not supported, neither IPv4 to IPv6 nor IPv6 to IPv4. For example, Global Accelerator will not allow me to configure a dual-stack accelerator with an IPv4-only ALB endpoint. Also, for IPv6 ALB endpoints, client IP preservation must be enabled.
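From the CLI, dual-stack appears to be chosen when the accelerator is created, via the IP address type; a minimal sketch (the name is my own placeholder, and note the Global Accelerator API is homed in us-west-2):

$ aws globalaccelerator create-accelerator \
      --name demo-dual-stack \
      --ip-address-type DUAL_STACK \
      --region us-west-2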

No use for this in the short term, but there should be chances later...

Amazon EC2 introduces r6a instances

Amazon EC2 has introduced r6a (AMD) instances: "New – Amazon EC2 R6a Instances Powered by 3rd Gen AMD EPYC Processors for Memory-Intensive Workloads".

Looking at the us-east prices, r6a is slightly more expensive than r5a: r6a.24xlarge is US$5.4432/hour, while r5a.24xlarge is US$5.424/hour.

AWS claims up to a 35% price-performance improvement over r5a:

Up to 35 percent higher price performance per vCPU versus comparable R5a instances

Bigger machines are also available: the largest r5a was r5a.24xlarge (96 vCPU + 768 GB RAM), while r6a goes up to r6a.48xlarge or r6a.metal (192 vCPU + 1536 GB RAM).

Regional availability is still limited, though; ap-southeast-1, which I use at work, looks like it will take a while longer:

You can launch R6a instances today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai) and Europe (Frankfurt, Ireland) as On-Demand, Spot, and Reserved Instances or as part of a Savings Plan.
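Whether a region has it yet can be checked ahead of time; for example, querying ap-southeast-1 as mentioned above (an empty InstanceTypeOfferings list means it is not there yet):

$ aws ec2 describe-instance-type-offerings \
      --region ap-southeast-1 \
      --filters Name=instance-type,Values=r6a.24xlarge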

EC2 Auto Scaling can now pre-launch instances using machine learning

A new Amazon EC2 feature: Auto Scaling can now pre-launch instances based on analysis of historical data: "Amazon EC2 Auto Scaling customers can now monitor their predictive scaling policy using Amazon CloudWatch".

It feeds on CloudWatch data, as mentioned in the "Predictive scaling for Amazon EC2 Auto Scaling" documentation:

Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch.
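Setting this up goes through a scaling policy of type PredictiveScaling; a minimal sketch, where the group name and target value are my own placeholders (Mode can stay at ForecastOnly while you build trust in the forecasts):

$ aws autoscaling put-scaling-policy \
      --auto-scaling-group-name my-asg \
      --policy-name cpu-predictive \
      --policy-type PredictiveScaling \
      --predictive-scaling-configuration '{
          "MetricSpecifications": [{
              "TargetValue": 50,
              "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"}
          }],
          "Mode": "ForecastOnly"
      }'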

I still wish Auto Scaling were integrated with Lambda; that would make it a Turing machine...

The root cause of Hacker News being down for so long the other day

Hacker News was down for quite a while a few days ago. Going by what's on Twitter, it started with this post at 2022/07/08 14:08 UTC:

A restore attempt failed along the way (2022/07/08 17:35 UTC):

Until the final recovery (2022/07/08 20:48 UTC):

The Twitter data puts it at a bit over six hours, which is indeed quite long for a site that presumably only needed its database restored. The cause was discussed afterwards in "HN is up again", where dang, who runs HN, also mentioned the downtime was over seven hours:

8 hours of downtime, but no data loss, since there was no data to lose during the downtime.

Last post before we went down (2022-07-08 12:46:04 UTC): https://news.ycombinator.com/item?id=32026565

First post once we were back up (2022-07-08 20:30:55 UTC): https://news.ycombinator.com/item?id=32026571 (hey, that's this thread! how'd you do that, tpmx?)

So, 7h 45m of downtime. What we don't know is how many posts (or votes, etc.) happened after our last backup, and were therefore lost. The latest vote we have was at 2022-07-08 12:46:05 UTC, which is about the same as the last post.

There can't be many lost posts or votes, though, because I checked HN Search (https://hn.algolia.com/) just before we brought HN back up, and their most recent comment and story were behind ours. That means our last backup on the ill-fated server was taken after the last API update (HN Search relies on our API), and the API gets updated every 30 seconds.

I'm not saying that's a rock-solid argument, but it suggests that 30 seconds is an upper bound on how much data we lost.

People then went looking for dang's replies (first-hand information, after all); a quick Ctrl-F turns up an entertaining guess. Starting from node 32028511 you can follow the thread, beginning with mikiem:

You are never going to guess how long the HN SSDs were in the servers... never ever... OK... I'll tell you: 4.5years. I am not even kidding.

Then kabdib's reply:

Let me narrow my guess: They hit 4 years, 206 days and 16 hours . . . or 40,000 hours.

And that they were sold by HP or Dell, and manufactured by SanDisk.

Do I win a prize?

(None of us win prizes on this one).
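The arithmetic behind that oddly precise guess does check out: 40,000 hours is exactly 4 (non-leap) years plus 206 days and 16 hours:

$ echo '40000 - 4*365*24' | bc   # hours left after four 365-day years
4960
$ echo '4960/24' | bc            # full days
206
$ echo '4960%24' | bc            # remaining hours
16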

Then dang said he thought this guess might well be right:

Wow. It's possible that you have nailed this.

Edit: here's why I like this theory. I don't believe that the two disks had similar levels of wear, because the primary server would get more writes than the standby, and we switched between the two so rarely. The idea that they would have failed within hours of each other because of wear doesn't seem plausible.

But the two servers were set up at the same time, and it's possible that the two SSDs had been manufactured around the same time (same make and model). The idea that they hit the 40,000 hour mark within a few hours of each other seems entirely plausible.

Mike of M5 (mikiem in this thread) told us today that it "smelled like a timing issue" to him, and that is squarely in this territory.

He later also dug the related stories out of HN's own /newest; judging by the keywords he pulled up, the drives look like HPE-issued SSDs:

It's also an example of the dharma of /newest – the rising and falling away of stories that get no attention:

HPE releases urgent fix to stop enterprise SSDs conking out at 40K hours - https://news.ycombinator.com/item?id=22706968 - March 2020 (0 comments)

HPE SSD flaw will brick hardware after 40k hours - https://news.ycombinator.com/item?id=22697758 - March 2020 (0 comments)

Some HP Enterprise SSD will brick after 40000 hours without update - https://news.ycombinator.com/item?id=22697001 - March 2020 (1 comment)

HPE Warns of New Firmware Flaw That Bricks SSDs After 40k Hours of Use - https://news.ycombinator.com/item?id=22692611 - March 2020 (0 comments)

HPE Warns of New Bug That Kills SSD Drives After 40k Hours - https://news.ycombinator.com/item?id=22680420 - March 2020 (0 comments)

(there's also https://news.ycombinator.com/item?id=32035934, but that was submitted today)

This downtime looks very much like they were bitten by the SSD firmware bug. For now they appear to have moved onto EC2:

$ host news.ycombinator.com
news.ycombinator.com has address 50.112.136.166
$ host 50.112.136.166      
166.136.112.50.in-addr.arpa domain name pointer ec2-50-112-136-166.us-west-2.compute.amazonaws.com.

From the thread this is probably temporary?

Amazon EC2 now offers Mac (M1) instances

At the end of 2020 AWS introduced Mac (Intel) instances built out of Mac minis: "Amazon EC2 introduces Mac instances". At the time, an M1 version was planned for 2021:

Apple M1 Chip – EC2 Mac instances with the Apple M1 chip are already in the works, and planned for 2021.

It slipped, unsurprisingly, and now the M1 version is out: "New – Amazon EC2 M1 Mac Instances".

According to the description it is still a Mac mini, attached to the AWS Nitro System:

EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance.

And like the Intel version, because it is billed through Dedicated Hosts, billing is per second but with a 24-hour minimum allocation period:

Amazon EC2 Mac instances are available as Dedicated Hosts through both On Demand and Savings Plans pricing models. The Dedicated Host is the unit of billing for EC2 Mac instances. Billing is per second, with a 24-hour minimum allocation period for the Dedicated Host to comply with the Apple macOS Software License Agreement. At the end of the 24-hour minimum allocation period, the host can be released at any time with no further commitment.
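Since the Dedicated Host is the billing unit, usage goes through allocate/release rather than a plain run-instances; a minimal sketch (the host ID is a placeholder):

$ aws ec2 allocate-hosts --instance-type mac2.metal \
      --availability-zone us-east-1a --quantity 1
$ # ...launch instances onto the host; then, at least 24 hours later:
$ aws ec2 release-hosts --host-ids h-0123456789abcdef0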

The Intel version is called mac1 and comes in a single size, mac1.metal; the M1 version is mac2, also with a single size, mac2.metal.

Taking the classic us-east-1 region, mac1.metal's on-demand price is US$1.083/hour while mac2.metal is US$0.65/hour, about 60% of the price. That's quite a bit cheaper, probably reflecting hardware amortization and power costs.

Also, from people's experience with M1 so far, Rosetta 2 is not necessarily much slower than native hardware. Although mac1.metal has 12 cores and mac2.metal only 8, for workloads that absolutely must run on a Mac in the cloud, the obvious use case is still CI-type jobs tied to the Apple ecosystem?

For now the main problem is still the 24-hour minimum billing unit, which cuts down flexibility quite a bit...

AWS announces the sunset of TLS 1.0/1.1 for its APIs

AWS has announced the sunset of TLS 1.0/1.1 for its APIs: "TLS 1.2 to become the minimum TLS protocol level for all AWS API endpoints".

The announcement gives 2023/06/28:

This update means you will no longer be able to use TLS versions 1.0 and 1.1 with all AWS APIs in all AWS Regions by June 28, 2023.

The TLS 1.0 ciphers that are still serviceable should be the AES + CBC family; with the mitigations implemented correctly they are more or less usable.

For really old systems such as Java 6 environments that genuinely can't be upgraded, one workaround that comes to mind is a self-signed CA + TLS proxy: terminate the TLS 1.0 connection at the proxy and re-wrap it as a TLS 1.2 connection.
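As a sketch of that workaround, stunnel can terminate TLS 1.0 on one side and re-encrypt as TLS 1.2 on the other by chaining two services over loopback. Everything here is an assumption for illustration: the endpoint, ports, and paths are mine, and proxy.pem would be a certificate signed by the self-signed CA the old client trusts:

; /etc/stunnel/stunnel.conf (sketch)
; terminate TLS 1.0 from the legacy client, using a cert
; signed by the self-signed CA that the client trusts
[tls10-in]
accept = 127.0.0.1:8443
cert = /etc/stunnel/proxy.pem
sslVersion = TLSv1
connect = 127.0.0.1:8444

; re-encrypt outbound as TLS 1.2 toward the AWS API endpoint
[tls12-out]
client = yes
accept = 127.0.0.1:8444
connect = ec2.us-east-1.amazonaws.com:443
sslVersion = TLSv1.2

The Java 6 client would then be pointed at 127.0.0.1:8443 instead of the real endpoint.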

A quirk of AWS cross-AZ bandwidth pricing within the same region

I happened to be working through AWS transfer costs between different AZs in the same region, cross-checking against the bill, and only then discovered it's not what I had imagined. Using EC2 as the example, see the explanation on the "Amazon EC2 On-Demand Pricing" page.

Traffic coming into AWS from the Internet is free:

Data Transfer IN To Amazon EC2 From Internet
All data transfer in $0.00 per GB

But when traffic crosses from one AZ into another, both the in and the out directions are charged:

Data transferred "in" to and "out" from Amazon EC2, Amazon RDS, Amazon Redshift, Amazon DynamoDB Accelerator (DAX), and Amazon ElastiCache instances, Elastic Network Interfaces or VPC Peering connections across Availability Zones in the same AWS Region is charged at $0.01/GB in each direction.

So calculating with US$0.01/GB alone is not enough; the effective rate is US$0.02/GB.
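A worked example with my own numbers: pushing 100 GB from an instance in one AZ to an instance in another AZ is billed once as "out" on the sending side and once as "in" on the receiving side:

$ echo '100*0.01 + 100*0.01' | bc   # 100 GB out + 100 GB in, in USD
2.00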

Likewise, public IPs and Elastic IPs are charged in both directions, and so is cross-VPC traffic, so all of these should also be calculated at US$0.02/GB:

IPv4: Data transferred “in” to and “out” from public or Elastic IPv4 address is charged at $0.01/GB in each direction.
IPv6: Data transferred “in” to and “out” from an IPv6 address in a different VPC is charged at $0.01/GB in each direction.

AWS launches Amazon CodeWhisperer, a competitor to GitHub Copilot

AWS has launched Amazon CodeWhisperer, which can be seen as a competitor to GitHub Copilot: "Now in Preview – Amazon CodeWhisperer – ML-Powered Coding Companion". There isn't much discussion on Hacker News yet: "Copilot just got company: Amazon announced Codewhisperer (amazon.com)".

It's still in Preview and therefore free, but no pricing has been given yet:

During the preview period, developers can use CodeWhisperer for free.

Also, the only languages supported for now are Python, Java, and JavaScript:

The preview supports code written in Python, Java, and JavaScript, using VS Code, IntelliJ IDEA, PyCharm, WebStorm, and AWS Cloud9. Support for the AWS Lambda Console is in the works and should be ready very soon.

As for the training data set, what's mentioned is open-source projects and Amazon's own code:

CodeWhisperer code generation is powered by ML models trained on various data sources, including Amazon and open-source code.

Development must have taken a while; I can't tell whether the timing is a coincidence, or whether GitHub Copilot going GA forced the Preview out the door...