Route 53 now offers a 100% SLA, while Route 53 Resolver offers a 99.99% SLA

AWS announced that Route 53 now comes with a 100% SLA, while Route 53 Resolver comes with a 99.99% SLA: "Amazon Route 53 Resolver Endpoints announces 99.99% Service Level Agreement and updates its Service Level Agreement for Route 53 hosted zones".

The Route 53 part refers to hosted zones; per the "Amazon Route 53 Service Level Agreement" page, the terms were updated on 2022/08/29.

For the Route 53 Resolver part, see the "Amazon Route 53 Resolver Endpoints Service Level Agreement" page; those terms were also updated on 2022/08/29.

Note that the 99.99% figure is the Multi-AZ Resolver Endpoints SLA; the Single-AZ Resolver Endpoints SLA appears to be set at 99.5%.

Not sure whether this is AWS's first 100% SLA term...

CloudFront opens edge locations in Vietnam

AWS announced new edge locations in Vietnam: "AWS announces new edge locations in Vietnam".

The new edge locations cover a range of services, including CloudFront as mentioned in the title:

The new locations offer edge networking services including Amazon CloudFront and AWS Global Accelerator that are integrated with security service AWS Shield that includes Distributed Denial of Service (DDoS), Web Application Firewall (WAF), and bot protection.

This looks like a straightforward expansion and the first presence in Vietnam; presumably traffic used to be routed to Thailand or Hong Kong?

Oddly though, the Vietnamese locations don't show up yet on CloudFront's "Global Edge Network" page (see the snapshots on the Internet Archive and on archive.today).

AWS opens a region in the UAE (me-central-1)

AWS has opened a new region in the United Arab Emirates: "Now Open–AWS Region in the United Arab Emirates (UAE)".

It also launches with three AZs from day one:

The Middle East (UAE) Region has three Availability Zones that you can use to reliably spread your applications across multiple data centers.

The first Middle East region was Bahrain; its capital Manama is only about 500 km in a straight line from Dubai, which is fairly close... For workloads whose main audience is in the Middle East, it looks like an Active-Active setup across the two regions could provide cross-region redundancy (a rough sketch of the DNS side follows below)?
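
One way the Active-Active idea could look on the DNS side is latency-based routing in Route 53 between a stack in Bahrain (me-south-1) and one in the UAE (me-central-1). This is only a sketch under assumptions: the hosted zone ID, record name, and load balancer DNS names are all made up, and it assumes Route 53 latency records accept me-central-1 as a region value.

import boto3

r53 = boto3.client("route53")

# Hypothetical hosted zone and record name; the two endpoint DNS names are
# placeholders for load balancers deployed in Bahrain and the UAE.
r53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": region,
                    "Region": region,  # latency-based routing
                    "TTL": 60,
                    "ResourceRecords": [{"Value": endpoint}],
                },
            }
            for region, endpoint in [
                ("me-south-1", "alb-bahrain.example.com"),
                ("me-central-1", "alb-uae.example.com"),
            ]
        ]
    },
)

Clients would then be answered with whichever endpoint is closer, and health checks could be attached so either region can take over the other's traffic.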

But more importantly, still waiting for the Taiwan local zone...

AWS Support integrates with Slack channels

Spotted this new feature in "New – AWS Support App in Slack to Manage Support Cases"; the key part is the screenshot below, which shows that you can open a private channel directly in Slack to discuss a case with AWS Support staff:

According to the announcement, anyone on Business support or above can use it:

The AWS Support App in Slack is now available to all customers with Business, Enterprise On-ramp, or Enterprise Support at no additional charge.

We don't actually talk to AWS people that often, but having this hooked up ahead of time seems like a good idea...

Google Cloud announces it will shut down IoT Core next year

In the Hacker News thread "Google IoT Core will be discontinued on Aug. 16, 2023", people are discussing Google Cloud's announcement that it will shut down IoT Core next year; the comments are pretty much what you'd expect...

On the AWS side, the recent comparable case that comes to mind is the retirement of EC2-Classic (see "The status of EC2-Classic"), which is still running to this day.

Another is porting the old Xen-based platform onto Nitro ("AWS brings the new Nitro architecture back to support the older Xen instance types"), so that existing Xen workloads can keep running.

Going further back, SimpleDB is still alive today, even though the official focus is now clearly on pushing DynamoDB.

Two completely different approaches...

CloudFront supports HTTP/3

Although HTTP/3 (RFC 9114) is still only a Proposed Standard rather than a full Internet Standard, CloudFront has announced support for it: "New – HTTP/3 Support for Amazon CloudFront".

You just need to check the option in the CloudFront console:
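
If you would rather script it than click through the console, the same switch should be reachable by setting the distribution config's HttpVersion field to http2and3; a minimal boto3 sketch, with a made-up distribution ID:

import boto3

cf = boto3.client("cloudfront")
dist_id = "E2EXAMPLE"  # hypothetical distribution ID

# Fetch the current config together with its ETag (required for updates).
resp = cf.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]
config["HttpVersion"] = "http2and3"  # serve HTTP/3 alongside HTTP/2

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])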

Reading the description in RFC 9114: HTTP/3, a client can attempt a QUIC connection over UDP, but needs some mechanism to fall back to HTTP/2 or HTTP/1.1 over TCP when that fails:

A client MAY attempt access to a resource with an "https" URI by resolving the host identifier to an IP address, establishing a QUIC connection to that address on the indicated port (including validation of the server certificate as described above), and sending an HTTP/3 request message targeting the URI to the server over that secured connection. Unless some other mechanism is used to select HTTP/3, the token "h3" is used in the Application-Layer Protocol Negotiation (ALPN; see [RFC7301]) extension during the TLS handshake.

Connectivity problems (e.g., blocking UDP) can result in a failure to establish a QUIC connection; clients SHOULD attempt to use TCP-based versions of HTTP in this case.

The other path is an upgrade hint delivered over the TCP connection via an HTTP header:

An HTTP origin can advertise the availability of an equivalent HTTP/3 endpoint via the Alt-Svc HTTP response header field or the HTTP/2 ALTSVC frame ([ALTSVC]) using the "h3" ALPN token.

Like this:

Alt-Svc: h3=":50781"

The client can then move over to HTTP/3:

On receipt of an Alt-Svc record indicating HTTP/3 support, a client MAY attempt to establish a QUIC connection to the indicated host and port; if this connection is successful, the client can send HTTP requests using the mapping described in this document.
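
One simple way to see the Alt-Svc mechanism from the client side is to make a plain TCP request and look at the advertised header; a small sketch with Python's requests library, where the distribution domain is a placeholder:

import requests

# Placeholder domain; substitute a CloudFront distribution with HTTP/3 enabled.
url = "https://dxxxxxxxxxxxxxx.cloudfront.net/"

# requests only speaks HTTP/1.1, so this first request goes out over TCP; if
# the distribution has HTTP/3 enabled, CloudFront should advertise it here.
resp = requests.get(url)
print(resp.headers.get("alt-svc"))  # expect something like: h3=":443"; ma=86400

An HTTP/3-capable client (for example a recent browser) can then retry the same origin over QUIC on the advertised port.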

The FAQ also notes that enabling HTTP/3 costs nothing extra; requests are simply billed at the usual request rates:

Q. Is there a separate charge for enabling HTTP/3?

No, there is no separate charge for enabling HTTP/3 on Amazon CloudFront distributions. HTTP/3 requests will be charged at the request pricing rates as per your pricing plan.

Turning it on to play around with first...

The status of EC2-Classic

Saw this on Twitter:

2022/08/15 was supposed to be the last day EC2-Classic stayed alive, as mentioned earlier in "AWS announces the retirement plan for EC2-Classic".

The official announcement, "EC2-Classic Networking is Retiring – Here's How to Prepare", gives the final date:

On August 15, 2022 we expect all migrations to be complete, with no remaining EC2-Classic resources present in any AWS account.

But since the wording is only "expect", there are presumably still customers who haven't migrated, and AWS doesn't seem to have said anything official about reaching this milestone...

AWS Global Accelerator supports IPv6

AWS's Anycast service, AWS Global Accelerator, announced IPv6 support: "New for AWS Global Accelerator – Internet Protocol Version 6 (IPv6) Support".

This is catch-up work; the feature only matters for clients on IPv6-only networks (ones without DNS64 + NAT64 translation), and consumer-facing IPv6 deployments in production today should still all have DNS64 + NAT64 in place...

Also, using this feature requires a VPC with IPv6 enabled:

To test this new feature, I need a dual-stack application with an ALB entry point. The application must be deployed in Amazon Virtual Private Cloud (Amazon VPC) and support IPv6 traffic.

IPv4 traffic then lands on the IPv4 side of the service, and IPv6 traffic on the IPv6 side:

Protocol translation is not supported, neither IPv4 to IPv6 nor IPv6 to IPv4. For example, Global Accelerator will not allow me to configure a dual-stack accelerator with an IPv4-only ALB endpoint. Also, for IPv6 ALB endpoints, client IP preservation must be enabled.
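
For completeness, a dual-stack accelerator is requested by setting the IP address type at creation time; a minimal boto3 sketch, where the accelerator name is made up and, as quoted above, the endpoints behind it still have to be dual-stack ALBs:

import boto3

# Global Accelerator is a global service; its control-plane API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

resp = ga.create_accelerator(
    Name="dual-stack-demo",      # hypothetical name
    IpAddressType="DUAL_STACK",  # default is IPV4
    Enabled=True,
)
print(resp["Accelerator"]["IpSets"])  # one IPv4 address set and one IPv6 set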

Not something I'll need in the short term, but there should be a chance to use it later...

Amazon EC2 launches r6a instances

Amazon EC2 has launched r6a (AMD) instances: "New – Amazon EC2 R6a Instances Powered by 3rd Gen AMD EPYC Processors for Memory-Intensive Workloads".

Looking at the us-east-1 pricing, r6a is slightly more expensive than r5a: r6a.24xlarge is US$5.4432/hour, while r5a.24xlarge is US$5.424/hour.

AWS claims up to 35% better price/performance per vCPU compared to r5a:

Up to 35 percent higher price performance per vCPU versus comparable R5a instances
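
Putting the two hourly prices quoted above side by side shows how small the price part of that claim is:

# Quick arithmetic on the us-east-1 on-demand prices quoted above.
r5a = 5.424    # USD/hour for r5a.24xlarge (96 vCPU)
r6a = 5.4432   # USD/hour for r6a.24xlarge (96 vCPU)

increase = (r6a - r5a) / r5a
print(f"r6a.24xlarge is {increase:.2%} more expensive per hour")  # ~0.35%

So the "up to 35 percent" figure presumably comes almost entirely from the per-vCPU performance side rather than from the price.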

It also scales to bigger machines: the largest r5a was r5a.24xlarge (96 vCPU + 768GB RAM), while r6a goes up to r6a.48xlarge and r6a.metal (192 vCPU + 1536GB RAM).

Regional availability still looks limited for now, though; ap-southeast-1, which I currently use at work, apparently has to wait a bit longer:

You can launch R6a instances today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai) and Europe (Frankfurt, Ireland) as On-Demand, Spot, and Reserved Instances or as part of a Savings Plan.

EC2 Auto Scaling can now pre-launch instances using machine learning

A new Amazon EC2 feature: Auto Scaling can pre-launch instances based on analysis of historical data: "Amazon EC2 Auto Scaling customers can now monitor their predictive scaling policy using Amazon CloudWatch".

This is driven by CloudWatch data, as mentioned in the "Predictive scaling for Amazon EC2 Auto Scaling" documentation:

Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch.
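
For reference, predictive scaling is configured as just another policy type on the Auto Scaling group; a minimal boto3 sketch, where the group name and target value are hypothetical:

import boto3

asg = boto3.client("autoscaling")

# Forecast CPU utilization from CloudWatch history and pre-launch instances
# to keep it around 40%. The Auto Scaling group name is made up.
asg.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="predictive-cpu-40",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 40.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",  # or "ForecastOnly" to just watch the forecast
    },
)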

Still hoping Auto Scaling gets integrated with Lambda, though; that would make it a Turing machine...