Moving the Blog onto CloudFront

As mentioned earlier in 「AWS 流量相關的 Free Tier 增加不少...」, the general data-transfer free tier went from 1GB/month per region up to 100GB/month, while CloudFront got a much bigger bump, from 50GB/month (limited to the first 12 months after signing up) to 1TB/month (no 12-month limit); on top of that, traffic between CloudFront and EC2 is not billed.

I just spent some time moving the blog from Cloudflare to CloudFront. I set the default /* behavior to no cache and added a separate cache behavior for /wp-content/*; I'll let it run for a while and see whether any problems show up...
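
As a rough illustration, here is a minimal sketch of the relevant part of a CloudFront DistributionConfig (for boto3's create_distribution / update_distribution), assuming the AWS managed cache policies CachingDisabled and CachingOptimized; the origin id wordpress-origin is a made-up name, and this is not my actual configuration:

```python
# Sketch of the two cache behaviors: default /* uses the managed
# CachingDisabled policy, /wp-content/* uses CachingOptimized.
# "wordpress-origin" is a hypothetical origin id; verify the managed
# policy IDs in your own account before using them.
CACHING_DISABLED = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"   # managed CachingDisabled
CACHING_OPTIMIZED = "658327ea-f89d-4fab-a63d-7e88639e58f6"  # managed CachingOptimized

behaviors_fragment = {
    "DefaultCacheBehavior": {
        "TargetOriginId": "wordpress-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": CACHING_DISABLED,        # default /*: no cache
    },
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [
            {
                "PathPattern": "/wp-content/*",   # static assets: cache them
                "TargetOriginId": "wordpress-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "CachePolicyId": CACHING_OPTIMIZED,
            }
        ],
    },
}
# This fragment would be merged into a full DistributionConfig (Origins,
# CallerReference, Comment, Enabled, ...) before calling
# boto3.client("cloudfront").create_distribution(DistributionConfig=...).
```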

The most visible improvement so far is latency: from HiNet, the free tier of Cloudflare routes you to the US, whereas with CloudFront the traffic stays in Taiwan:

On top of that, the international leg now rides on AWS's backbone, which in theory should be a bit faster than having HiNet haul the traffic to a US PoP on its own...

AWS's Jakarta Region in Indonesia Is Now Open

AWS announced that the Jakarta region in Indonesia is open for use, with the code ap-southeast-3: 「Now Open – AWS Asia Pacific (Jakarta) Region」.

Note that the Jakarta region has to be explicitly enabled before you can use it (a quick way to check the opt-in status is sketched after the quote):

As is the case with all of the newer AWS Regions, you need to explicitly enable this one in order to be able to create and manage resources within it.
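
One way to check whether the region is enabled for an account is EC2's DescribeRegions with AllRegions=True (which also lists regions that are not enabled yet); a minimal boto3 sketch, assuming the RegionNames filter can be combined with AllRegions:

```python
# Check the opt-in status of ap-southeast-3 for the current account.
# OptInStatus is one of "not-opted-in", "opted-in", "opt-in-not-required".
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_regions(RegionNames=["ap-southeast-3"], AllRegions=True)
for region in resp["Regions"]:
    print(region["RegionName"], region["OptInStatus"])
```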

The Jakarta AWS Direct Connect locations were announced at the same time: 「AWS Direct Connect announces two new locations in Indonesia」.

The company's main workloads are currently in Singapore (ap-southeast-1), so it looks like there's quite a bit of work ahead... (at least the planning has to start)

AWS's us-west-1 and us-west-2 Blew Up

AWS blew up again, though this time it wasn't us-east-1 that died; the Hacker News thread 「AWS appears to be down again」 is worth a read...

Speaking of which, just a few days ago in 「AWS's Report on the us-east-1 Incident from a Few Days Ago」 I mentioned that moving things to us-west-2 was an option, and now it goes and blows up...

My SmokePing instance also shows some of it, for example HiNet to dynamodb.us-west-1.amazonaws.com:

HiNet to dynamodb.us-west-2.amazonaws.com:

Cable internet (the APOL line) to dynamodb.us-west-1.amazonaws.com:

Cable internet (the APOL line) to dynamodb.us-west-2.amazonaws.com:

You can see that things already broke at the network layer; let's wait a few days for AWS to publish a report...

AWS's Report on the us-east-1 Incident from a Few Days Ago

AWS has published the report on the us-east-1 incident from a few days ago: 「Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region」. The Hacker News thread 「Summary of the AWS Service Event in the Northern Virginia (US-East-1) Region (amazon.com)」 is also worth a look; some people there suggest staying away from us-east-1 as much as possible.

The thread from the day it blew up, 「AWS us-east-1 outage (amazon.com)」, is also worth reading; it even gets into corporate-culture issues...

Besides being AWS's oldest region, us-east-1 is also the region with the most features (most new features include us-east-1 in the first wave) and the cheapest one, so a lot of people run their services there.

Because of that, it is also AWS's largest region, and since AWS is the largest public cloud, a lot of things there run into problems nobody has hit before; roughly every year (or every other year) there is a fairly serious outage, which is something to keep in mind if you picked us-east-1 for the price.

Speaking of price, if you are looking for a cheaper region, another option worth considering is us-west-2; it also frequently lands in the first wave when new features and products launch, and its track record so far looks quite a bit better than us-east-1's...

What broke this time was mainly congestion on the network used for internal control (referred to as the internal network), not the network customers use (referred to as the main network):

To explain this event, we need to share a little about the internals of the AWS network. While the majority of AWS services and all customer applications run within the main AWS network, AWS makes use of an internal network to host foundational services including monitoring, internal DNS, authorization services, and parts of the EC2 control plane.

The report later mentions that the congestion also affected the monitoring systems, so they could only analyze and guess from the logs they had and rule things out step by step; and since the deployment systems also ride on the internal network, pushing out fixes couldn't go fast either...

Basically, though, you can expect another round next year or the year after...

Attacks on the Tor Network

Saw some analysis of attacks against the Tor network in 「Is “KAX17” performing de-anonymization Attacks against Tor Users?」...

The BTCMITM20 group is easier to understand, and its goal is clearer:

primary motivation: financial profit (by replacing bitcoin addresses in tor exit traffic)

The KAX17 group, on the other hand, looks more like something with a government agency behind it:

motivation: unknown; plausible: Sybil attack; collection of tor client and/or onion service IP addresses; deanonymization of tor users and/or onion services

It controls quite a few hops at the same time, which gives it a real chance of linking a whole circuit together (a rough combined-probability estimate follows the list below):

To provide a worst-case snapshot, on 2020–09–08 KAX17's overall tor network visibility would allow them to de-anonymize tor users with the following probabilities:

  • first hop probability (guard) : 10.34%
  • second hop probability (middle): 24.33%
  • last hop probability (exit): 4.6%
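
As a back-of-the-envelope illustration of what those numbers imply (treating the quoted per-hop probabilities as independent, which is only an approximation of how Tor picks relays):

```python
# Rough estimate from the quoted per-hop probabilities; assumes independence.
guard = 0.1034   # first hop (guard)
middle = 0.2433  # second hop (middle)
exit_ = 0.046    # last hop (exit)

# Controlling both the guard and the exit is enough to correlate both ends.
print(f"guard + exit: {guard * exit_:.2%}")             # ~0.48% of circuits
print(f"all three hops: {guard * middle * exit_:.2%}")  # ~0.12% of circuits
```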

Since Tor is an anonymity network, the best defense for now is still getting more people to run relays, so that no single group can control enough of them to piece circuits together... I should find some time to write up my thoughts from running an exit node for over a year.

AWS Brings the New Nitro Architecture Back to Support the Old Xen Instance Types

Saw this via Jeff Barr on Twitter, about AWS deciding to make the new Nitro architecture support the old Xen-based instance types:

The original post is 「Xen-on-Nitro: AWS Nitro for Legacy Instances」; it paints the whole thing rather flatteringly, but it does mention a few very real problems. The first is that a large number of users (1.2 million) are still on Xen-based machines:

Today, we still have over 1.2 Million unique customers using Xen-based instances.

But these machines are getting harder and harder to maintain: on one hand Nitro saves AWS a lot of software maintenance work, and on the other hand almost no new users pick these old instance types, which makes procurement a problem as well.

However, the underlying hardware is old and it’s getting increasingly difficult to maintain support for these older hypervisor systems.

So the EC2 team implemented a Xen-compatible layer on top of Nitro; starting in 2022 everything can run on Nitro systems, which should bring the EC2 team's maintenance cost down considerably:

All of these innovations enable us to continue to offer many of our older instance types well past the lifetime of the original hardware. Starting in 2022, customers launching M1, M2, M3, C1, C3, R3, I2 and T1 instances will land on Nitro supported instances hardware and existing running instances will also be migrated.

Technical debt can't just disappear, so this is how you lower the maintenance cost instead XD

Amazon CloudWatch Launches RUM (Real-User Monitoring)

Saw that Amazon CloudWatch has launched a RUM (Real-User Monitoring) feature: 「New – Real-User Monitoring for Amazon CloudWatch」.

From the screenshots you can see that JavaScript is what's currently supported:

The site-wide analytics you'd expect to always be there:

It also gives some information about the client side:

Then you can dig into the slower pages:

And there's a record of page views:

The price works out to US$10 per 1M events, which doesn't feel particularly cheap? (A quick check of the math follows the quote below.)

CloudWatch RUM is available now and you can start using it today in ten AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). You pay $1 for every 100K events that are collected.
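
A quick back-of-the-envelope check of that pricing; the 3M-events-per-month figure below is just a hypothetical example:

```python
# CloudWatch RUM pricing per the quote above: $1 per 100K events.
price_per_100k = 1.0

print(1_000_000 / 100_000 * price_per_100k)  # => 10.0 USD per 1M events

# Hypothetical site collecting 3M RUM events a month:
print(3_000_000 / 100_000 * price_per_100k)  # => 30.0 USD per month
```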

As for competing products, Datadog comes to mind with its RUM offering; without a commitment it's US$0.65 (per 1,000 sessions, per month).

New Relic's Browser Monitoring feature should be something similar too, but the price doesn't seem to be listed separately.

Over at Mixpanel, the $25/month plan covers 100K MTUs (monthly tracked users), with 1K events per MTU, so it can probably do something similar; Amplitude next door doesn't list it...

Still, from the billing standpoint this is a lot more convenient...

AWS Is About to Launch Graviton3 Instances

AWS is planning to launch Graviton3-based instance types, currently in the preview stage: 「Join the Preview – Amazon EC2 C7g Instances Powered by New AWS Graviton3 Processors」.

The current claim is a 25% performance improvement over the previous-generation Graviton2, plus improvements in floating-point and cryptography-related workloads (those numbers presumably get some help from the instruction set):

In comparison to the Graviton2, the Graviton3 will deliver up to 25% more compute performance and up to twice as much floating point & cryptographic performance. On the machine learning side, Graviton3 includes support for bfloat16 data and will be able to deliver up to 3x better performance.

They also mention signed pointers, which keep the stack from being tampered with; this needs OS and compiler support, and is a defense aimed at stack-based attacks (a toy sketch of the idea follows the quote below):

Graviton3 processors also include a new pointer authentication feature that is designed to improve security. Before return addresses are pushed on to the stack, they are first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they are validated before being used. An exception is raised if the address is not valid, thereby blocking attacks that work by overwriting the stack contents with the address of harmful code. We are working with operating system and compiler developers to add additional support for this feature, so please get in touch if this is of interest to you.
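
A toy sketch of the mechanism described in the quote, assuming nothing about the actual hardware (real pointer authentication packs a truncated signature into unused pointer bits; this just keeps a MAC next to the address):

```python
# Toy illustration of pointer authentication: sign the return address together
# with context (the stack pointer), and validate it before it is used.
import hmac, hashlib, os

KEY = os.urandom(16)  # per-process secret key

def sign(return_addr: int, stack_ptr: int) -> bytes:
    msg = return_addr.to_bytes(8, "little") + stack_ptr.to_bytes(8, "little")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:4]

def authenticate(return_addr: int, stack_ptr: int, tag: bytes) -> int:
    if not hmac.compare_digest(sign(return_addr, stack_ptr), tag):
        raise RuntimeError("pointer authentication failed")  # CPU would raise an exception
    return return_addr

tag = sign(0x401000, 0x7FFF0000)         # "push": sign before pushing
authenticate(0x401000, 0x7FFF0000, tag)  # "pop": validate before returning
```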

It also uses DDR5 memory:

C7g instances will be available in multiple sizes (including bare metal), and are the first in the cloud industry to be equipped with DDR5 memory. In addition to drawing less power, this memory delivers 50% higher bandwidth than the DDR4 memory used in the current generation of EC2 instances.

No pricing yet; it might land at the same level as c6g? Then again, given the switch in memory generation, it could also be a bit more expensive?

Digging around a bit more, ARM has put out a press release involving Graviton2 (which is built on ARM's Neoverse cores): 「Designing Arm Cortex-M55 CPU on Arm Neoverse powered AWS Graviton2 Processors」; once Graviton3 is fully announced there should be more details about it...

Amazon RDS Now Supports Read-Only Instances as Multi-AZ Standbys

I've never used RDS's Multi-AZ, so I never even noticed that this feature didn't exist until now: 「New Multi-AZ deployment option for Amazon RDS for PostgreSQL and for MySQL; increased read capacity, lower and more consistent write transaction latency, and shorter failover time (Preview)」.

It looks like (and as I recall) the previous Multi-AZ setup kept another machine running, but you couldn't use it:

In the case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby, so that database operations resume as soon as the failover is complete.

Now the standby that's already running can serve read-only traffic:

The standby DB instances act as automatic failover targets and can also serve read traffic to increase throughput without needing to attach additional read replica DB instances.

Besides saving cost, since these instances already take query traffic in normal operation, the warm-up time is much shorter when a failover actually happens (especially once the service is big enough).
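
A minimal sketch, assuming this deployment option shows up as a DB cluster with a cluster-level reader endpoint (the identifier my-multiaz-cluster is made up); read traffic would then be pointed at that endpoint:

```python
# Look up the writer and reader endpoints of a Multi-AZ DB cluster.
# "my-multiaz-cluster" is a hypothetical identifier.
import boto3

rds = boto3.client("rds", region_name="us-west-2")
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="my-multiaz-cluster"
)["DBClusters"][0]

print("writer endpoint:", cluster["Endpoint"])        # read/write traffic
print("reader endpoint:", cluster["ReaderEndpoint"])  # readable standbys
```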

There are some limitations though; first, it looks like only Graviton2 (ARM-based) instance types are supported?

The readable standby option for Amazon RDS Multi-AZ deployments works with AWS Graviton2 R6gd and M6gd DB instances (with NVMe-based SSD instance storage) and Provisioned IOPS Database Storage.

Next, the supported regions:

The Preview is available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions.

And you need new enough versions; it's only available for MySQL 8 and PostgreSQL 13.4:

Amazon RDS for MySQL supports the Multi-AZ readable standby option for MySQL version 8.0.26. Amazon RDS for PostgreSQL supports the Multi-AZ readable standby option for PostgreSQL version 13.4.

Still, this looks pretty good; after all, it's closer to what we used to do back in on-premises data centers...

Reading Intel + Varnish's PR Piece About 500Gbps from a Single Server

Saw the PR piece 「Varnish Software Achieves 500Gbps Throughput Per Server for UHD Video Content」, a collaboration between Intel and Varnish, claiming they've reached 500Gbps of throughput on a single machine:

According to Varnish Software, the following were the outcomes of the test:

  • 509.7 Gbps live-linear throughput, using a dual-processor configuration
  • 487.2 Gbps video-on-demand throughput, using a dual-processor configuration

The white paper can be downloaded in exchange for your personal data on the 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」 page, though a quick search shows Intel has a PDF up as well (not sure whether the two are the same document): 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」.

The single-CPU server is supposedly wired out through four 100Gbps interfaces, and the dual-CPU server through eight (SUT here is short for system under test):

These client systems were connected to the CDN servers using 100 GbE links through a switch; 4x100 GbE connections for the single-processor SUT, and 8x100 GbE for the dual-processor SUT. Testing was done using Wrk, a widely recognized open-source HTTP(S) benchmarking tool.

But if you actually look at the diagrams, the servers have two 100Gbps ports (single CPU) and four 100Gbps ports (dual CPU), and wrk also takes up two or four 100Gbps ports:

The test configurations are also listed at the end of the white paper: everything runs on Ubuntu 20.04, the single-CPU setup uses two Intel 100Gbps NICs, and the dual-CPU setup uses four Mellanox 100Gbps NICs:

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Single processor SUT configuration was based on the Supermicro SMC 110P-WTR-TNR single socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by two 100 GbE Intel® Ethernet Network Adapters E810. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 8 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Dual processor SUT configuration was based on the Supermicro SMC 22OU-TNR dual socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by four 100 GbE Mellanox MCX516A-CDAT adapters. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 12 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

Which immediately leaves you scratching your head: how do four 100Gbps NICs get to 500Gbps of bandwidth...

This PR piece immediately brings to mind the slides Netflix published earlier (mentioned before in 「Netflix 在單機服務 400Gbps 的影音流量」); in those slides Netflix noted that on the Intel platform they were limited by memory bandwidth, and a whole machine could only push 230Gbps.

Another guess: if the 500Gbps Intel and Varnish claim is the total traffic counted on the switch (does anyone actually count it that way? are you Juniper?), then halving that 500Gbps (and that's being generous by not even subtracting the cache-miss traffic pulled from the origin server) lands you right around what Netflix got on FreeBSD...
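
The arithmetic behind that guess, using the PR's number and the NIC count visible in the diagrams:

```python
# The dual-CPU SUT shows four 100GbE NICs in the diagrams, so the line-rate
# ceiling toward the clients is 400 Gbps, below the claimed 509.7 Gbps.
nics, per_nic_gbps = 4, 100
print("line rate:", nics * per_nic_gbps, "Gbps")   # 400 Gbps < 509.7 Gbps

# If 509.7 Gbps is instead the total counted on the switch (both directions
# summed), halving it lands near Netflix's ~230 Gbps on the Intel platform.
print("halved:", 509.7 / 2, "Gbps")                # ~255 Gbps
```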

Sitting here waiting for a rebuttal XDDD