Moving the Blog onto CloudFront

As mentioned before in 「AWS Traffic-Related Free Tiers Got a Big Bump...」, general data transfer went from 1GB/month per region to 100GB/month, and CloudFront got a much bigger increase, from 50GB/month (only for the first 12 months after signup) to 1TB/month (no longer limited to 12 months). On top of that, traffic between CloudFront and EC2 isn't billed.

I just spent some effort moving the blog from Cloudflare over to CloudFront. For now the default /* behavior is set to no cache, with a separate cache behavior added for /wp-content/*; I'll let it run for a while and see whether anything breaks...
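For reference, a minimal sketch of what the relevant pieces of the distribution config might look like for this kind of setup (my own illustration, not the exact configuration used here). The origin id and the cache policy IDs are placeholders; you'd look up the managed CachingDisabled/CachingOptimized policy IDs (or your own policies) and merge this fragment into the full DistributionConfig before calling update_distribution with boto3:

```python
# Sketch of the two behaviors: default /* uncached, /wp-content/* cached.
# Policy IDs and origin id below are placeholders, not real values.
behaviors_fragment = {
    "DefaultCacheBehavior": {
        "TargetOriginId": "blog-origin",                 # assumed origin id
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "<CachingDisabled-policy-id>",  # /* -> no cache
    },
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [
            {
                "PathPattern": "/wp-content/*",          # static assets -> cached
                "TargetOriginId": "blog-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "CachePolicyId": "<CachingOptimized-policy-id>",
            }
        ],
    },
}
```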

The most visible improvement so far is latency: from HiNet, the free tier of Cloudflare routes you to the US, while with CloudFront it's Taiwan:

On top of that, the international leg now rides AWS's backbone, which in theory should be a bit faster than HiNet carrying the traffic to a PoP in the US on its own...

A Look at the Intel + Varnish 500Gbps-per-Server PR Piece

Saw the PR piece at 「Varnish Software Achieves 500Gbps Throughput Per Server for UHD Video Content」: Intel and Varnish, working together, claim to have reached 500Gbps of throughput on a single server:

According to Varnish Software, the following were the outcomes of the test:

  • 509.7 Gbps live-linear throughput, using a dual-processor configuration
  • 487.2 Gbps video-on-demand throughput, using a dual-processor configuration

The whitepaper can be downloaded from the 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」 page in exchange for your personal information, but a quick search turns up a PDF that Intel has published (though I'm not sure the two are the same document): 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」.

The single-CPU server is hooked up through four 100Gbps interfaces and the dual-CPU server through eight (SUT here is short for system under test):

These client systems were connected to the CDN servers using 100 GbE links through a switch; 4x100 GbE connections for the single-processor SUT, and 8x100 GbE for the dual-processor SUT. Testing was done using Wrk, a widely recognized open-source HTTP(S) benchmarking tool.

But if you actually look at the diagram, the servers have two 100Gbps links (single CPU) or four 100Gbps links (dual CPU), with wrk taking up the other two or four 100Gbps links:

The test configurations are also listed at the end of the whitepaper: everything runs on Ubuntu 20.04, the single-CPU setup uses two Intel 100Gbps NICs, and the dual-CPU setup uses four Mellanox 100Gbps NICs:

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Single processor SUT configuration was based on the Supermicro SMC 110P-WTR-TNR single socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by two 100 GbE Intel® Ethernet Network Adapters E810. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 8 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80 generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnishplus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Dual processor SUT configuration was based on the Supermicro SMC 22OU-TNR dual socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by four 100 GbE Mellanox MCX516A-CDAT adapters. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 12 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80- generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

Which immediately leaves you scratching your head: how do four 100Gbps NICs deliver 500Gbps of bandwidth...

This PR immediately brings to mind the slides Netflix published earlier (previously covered in 「Netflix Serving 400Gbps of Video Traffic on a Single Machine」), where Netflix mentioned that on the Intel platform they were limited by memory bandwidth and the whole machine could only push 230Gbps.

Another guess: if the 500Gbps that Intel and Varnish claim is the total traffic counted at the switch (does anyone actually count it that way? are you Juniper?...), then converting back, that 500Gbps roughly halves (and that's being generous by not even subtracting the cache-miss traffic fetched from the origin server), which is about the same as what Netflix got on FreeBSD...
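As a rough sanity check on that guess (my arithmetic, not anything from the PR or the whitepaper):

```python
# Four 100GbE NICs cap the server's own egress at 400 Gbps, so the claimed
# ~510 Gbps can't be what the box itself sends. Counted at the switch
# (ingress + egress), it works out to roughly half actually served.
nic_capacity = 4 * 100            # Gbps, dual-CPU SUT per the whitepaper
claimed = 509.7                   # Gbps, from the PR

print(f"NIC capacity: {nic_capacity} Gbps")
print(f"If counted switch-side (in + out): ~{claimed / 2:.0f} Gbps served")
# ~255 Gbps -- in the same ballpark as Netflix's ~230 Gbps on Intel hardware.
```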

Now sitting here waiting for a rebuttal XDDD

AWS Traffic-Related Free Tiers Got a Big Bump...

Jeff Barr announced an expansion of the traffic-related AWS free tiers: 「AWS Free Tier Data Transfer Expansion – 100 GB From Regions and 1 TB From Amazon CloudFront Per Month」.

General data transfer goes from 1GB/month/region to 100GB/month; with 21 regions today, the old per-region quota tops out at 21GB, so nobody ends up worse off, and most people or teams stick to one or two regions anyway. This free tier probably saves something like $10 to $20?

Data Transfer from AWS Regions to the Internet is now free for up to 100 GB of data per month (up from 1 GB per region). This includes Amazon EC2, Amazon S3, Elastic Load Balancing, and so forth. The expansion does not apply to the AWS GovCloud or AWS China Regions.

Then there's the CloudFront part: the free tier used to cover only the first 12 months, but now every account gets it, and it goes from 50GB/month up to 1TB/month. That's a pretty substantial free tier, maybe $100 to $200?

Data Transfer from Amazon CloudFront is now free for up to 1 TB of data per month (up from 50 GB), and is no longer limited to the first 12 months after signup. We are also raising the number of free HTTP and HTTPS requests from 2,000,000 to 10,000,000, and removing the 12 month limit on the 2,000,000 free CloudFront Function invocations per month. The expansion does not apply to data transfer from CloudFront PoPs in China.
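A quick back-of-envelope check on both estimates, using per-GB rates I'm assuming from the current price lists (roughly $0.09/GB for region egress in us-east-1, and roughly $0.085 to $0.12/GB for CloudFront's first 10TB depending on region); check the pricing pages for exact numbers:

```python
# Rough monthly value of the new free tiers at the assumed list prices above.
region_free_gb = 100
cloudfront_free_gb = 1024        # 1 TB

print(f"Region egress:    ${region_free_gb * 0.09:.2f}")
print(f"CloudFront (US):  ${cloudfront_free_gb * 0.085:.2f}")
print(f"CloudFront (AP):  ${cloudfront_free_gb * 0.12:.2f}")
# -> roughly $9, $87 and $123 -- in the same ballpark as the guesses above.
```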

It doesn't take effect until December this year, so be careful not to start burning through it right away:

This change is effective December 1, 2021 and takes effect with no effort on your part.

With this it might be worth trying to move both the blog and the wiki over. Right now both are on Cloudflare's free tier, and HiNet users basically get routed to Cloudflare's US West PoPs, occasionally an Asian PoP during off-peak hours, but never a PoP in Taiwan...

That said, I remember WordPress + CloudFront had some issues before; I'll have to look into how to set it up properly...

Cloudflare Images

Cloudflare Images is now open as a paid product: 「Cloudflare Images Now Available to Everyone」.

You upload files to Cloudflare, and it charges separately for storage and delivery:

You pay $5/month for every 100,000 stored images and $1 per 100,000 delivered images. There are no additional resizing, compute or egress costs.

The file size limit is 10MB, so $5/month of storage can cover up to about 1TB of space. $0.005/GB is a pretty nice number; with small images you get a slightly worse deal? Looks like uploading big images is the happier case...

Cloudflare Images offers multiple ways to upload your images. We accept all the common file formats including JPEG, GIF and WEBP. Each image uploaded to Images can be up to 10 MB.
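Working through the numbers behind that claim (my own arithmetic, based only on the quoted pricing and the 10MB cap):

```python
# Effective $/GB of the storage charge for different average image sizes.
images = 100_000
price = 5.0                                   # $/month per 100k stored images

for size_mb in (10, 1, 0.1):
    total_gb = images * size_mb / 1024
    print(f"{size_mb:>4} MB/image -> {total_gb:8.1f} GB for ${price} "
          f"(${price / total_gb:.3f}/GB)")
# 10 MB images come out to ~$0.005/GB; 100 KB images to ~$0.5/GB.
```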

The supported file formats are the common ones: GIF, JPEG, PNG and WebP:

When a client requests an image, Cloudflare Images will pick the optimal format between WebP, PNG, JPEG and GIF.

There are also plans to support AVIF:

We’re just getting started with Cloudflare Images. Here are some of the features we plan to support soon:

AVIF support for even smaller file sizes and faster load times.

Durability isn't mentioned, so no idea how much of it there will be...

Cloudflare's PoP Buildout in Brazil

Saw Cloudflare talking about their plan to deploy in 25+ cities in Brazil: 「Expanding Cloudflare to 25+ Cities in Brazil」. For now, eight locations are visible:

Comparing land area, Brazil is roughly on the same scale as the US minus Alaska:

Population-wise the US is around 328M (Alaska is under 1M, so it barely changes the picture) and Brazil is 215M; given that Cloudflare currently has 39 PoPs in the US, ramping Brazil up like this does make sense, though it looks like the real driver is a partnership with a large ISP:

Today, we are excited to announce an expansion we’ve been working on behind the scenes for the last two years: a 25+ city partnership with one of the largest ISPs in Brazil.

They don't say which ISP, though; we'll see whether more details come out later...

CloudFront Announces Support for ECDSA Certificates

Amazon CloudFront announced support for ECDSA certificates: 「Amazon CloudFront now supports ECDSA certificates for HTTPS connections to viewers」.

The main benefit is a smaller certificate, which makes HTTPS connection setup faster (both the transfer and the computation):

As a result, conducting TLS handshakes with ECDSA certificates requires less networking and computing resources making them a good option for IoT devices that have limited storage and processing capabilities.

I vaguely remember seeing data long ago saying the computational cost of 256-bit EC is about the same as 768~1024-bit RSA, but I can't find the source at the moment...
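I couldn't dig up those numbers either, but a quick local benchmark gives a feel for the gap. This is a minimal sketch assuming the third-party cryptography package is installed; the absolute numbers depend entirely on the machine, so treat it as illustration only:

```python
# Compare signing time and signature size: ECDSA P-256 vs RSA-2048.
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

message = b"x" * 1024

ec_key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256 / prime256v1
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def bench(label, sign):
    start = time.perf_counter()
    for _ in range(200):
        sig = sign()
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed / 200 * 1000:.2f} ms/signature, {len(sig)} bytes")

bench("ECDSA P-256", lambda: ec_key.sign(message, ec.ECDSA(hashes.SHA256())))
bench("RSA-2048   ", lambda: rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256()))
```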

For now CloudFront only supports NIST P-256 (secp256r1, also known as prime256v1):

Starting today, you can use Elliptic Curve Digital Signature Algorithm (ECDSA) P256 certificates to negotiate HTTPS connections between your viewers and Amazon CloudFront.

But NIST P-256 has long been criticized; 「SafeCurves: choosing safe curves for elliptic-curve cryptography」 shows that the efficiency claims behind NIST's design choices don't actually hold up:

Subsequent research (and to some extent previous research) showed that essentially all of these efficiency-related decisions were suboptimal, that many of them actively damaged efficiency, and that some of them were bad for security.

Still, current standards are converging on NIST P-256, NIST P-384, and NIST P-521 (mostly because of CA/Browser Forum restrictions), and you can't get certificates issued for other curves anyway, so for now it's probably a matter of continuing to wait and see...

The Fastly Outage

Saw the discussion on Hacker News at 「Summary of June 8 outage (fastly.com)」; the link goes to Fastly's initial official write-up: 「Summary of June 8 outage」.

It mentions that the incident was triggered by a particular customer configuration:

10:27 Fastly Engineering identified the customer configuration

It also specifically mentions Fastly's WebAssembly and Compute@Edge, which is presumably where things blew up:

Broadly, this means fully leveraging the isolation capabilities of WebAssembly and Compute@Edge to build greater resiliency from the ground up. We’ll continue to update our community as we make progress toward this goal.

My guess is that the basic resource protections were in place (against things like a program calling itself, fork bombs and the like), but some part of the interaction between these two pieces wasn't covered, so resources got consumed all the way until everything fell over.

It sounds like a more complete report will follow; we'll see how much it reveals when it comes out...

QUIC Becomes a Standard, from RFC 8999 to RFC 9002

This is a few days old already; over the past couple of days Fastly and Cloudflare both published posts as well, now that QUIC has become a standard: 「QUIC is now RFC 9000」, 「QUIC Version 1 is live on Cloudflare」.

Both posts mainly announce that their platforms support QUIC. Next we can wait for benchmark reports to see how big the gap is between HTTP/2 over TCP BBR and QUIC, given all the complicated workaround mechanisms the web has already accumulated... if I remember right, QUIC also uses a BBR-based algorithm.

Under QUIC, https runs over 443/udp, so if your firewall denies all connections by default and opens things up rule by rule, you'll need to add a rule for this.

The other thing is waiting for nginx support; there's some information at 「NGINX QUIC Preview」, and you can see activity in 「nginx-quic: log」. Quite a few of the commits are just syncing with mainline nginx, but there are still some QUIC-related ones to look at...

CloudFront Price Cuts for India and Asia Pacific

AWS announced CloudFront price cuts for India and the Asia Pacific regions: 「Amazon CloudFront announces price cuts in India and Asia Pacific regions」, retroactive to the beginning of this month:

Amazon CloudFront announces price cuts of up to 36% in India and up to 20% in the Asia Pacific region (Hong Kong, Indonesia, Philippines, Singapore, South Korea, Taiwan, & Thailand) for Regional Data Transfer Out to Internet rates. The new CloudFront prices in these regions are effective May 1st, 2021.

Comparing the current 「Amazon CloudFront Pricing」 with the copy on the Internet Archive, it looks like the First 10TB, Next 40TB, Next 100TB, and Next 350TB tiers all went down, while the tiers above that stay at their original prices.

For ordinary light use, the relevant tier is mostly First 10TB, where the Asia Pacific per-GB price drops from USD$0.14 to USD$0.12. Not nothing, and anyone with serious volume has presumably already negotiated commit & discount deals anyway...
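To get a sense of scale, a quick calculation assuming 1TB/month of traffic billed entirely in the Asia Pacific First 10TB tier (an arbitrary volume, just for illustration):

```python
# Monthly cost before and after the cut for an assumed 1 TB of APAC traffic.
gb = 1 * 1024

old_cost = gb * 0.14
new_cost = gb * 0.12
print(f"Before: ${old_cost:.2f}, after: ${new_cost:.2f}, "
      f"saving ${old_cost - new_cost:.2f} (~{1 - new_cost / old_cost:.0%})")
# -> about $20/month less on 1 TB, a ~14% cut for this tier.
```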

CloudFront Splits Up the Old Lambda@Edge Product Line, Launching CloudFront Functions

Amazon CloudFront split up its original Lambda@Edge product line, adding CloudFront Functions: 「Introducing CloudFront Functions – Run Your Code at the Edge with Low Latency at Any Scale」.

From a product standpoint it has more restrictions than Lambda@Edge, but it's a lot cheaper.

Looking at pricing first, CloudFront Functions only charges per request:

Invocation pricing is $0.10 per 1 million invocations ($0.0000001 per request).

Lambda@Edge, on the other hand, has two charges, and the request charge alone is six times as much:

Request pricing is $0.60 per 1 million requests ($0.0000006 per request).

Duration is calculated from the time your code begins executing until it returns or otherwise terminates. You are charged $0.00005001 for every GB-second used.
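A rough comparison at an assumed volume, say 100 million requests a month with Lambda@Edge at its 128MB minimum and about 1ms of billed duration (the workload numbers are made up, purely to show the gap):

```python
# Back-of-envelope monthly cost at the assumed workload described above.
requests = 100_000_000

cf_functions = requests * 0.0000001                    # $0.10 per 1M invocations
lae_requests = requests * 0.0000006                    # $0.60 per 1M requests
lae_duration = requests * 0.001 * (128 / 1024) * 0.00005001   # GB-seconds

print(f"CloudFront Functions: ${cf_functions:,.2f}")
print(f"Lambda@Edge:          ${lae_requests + lae_duration:,.2f}")
# -> roughly $10 vs $60.63; the gap widens further with longer durations.
```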

Of course, being that cheap comes with plenty of restrictions, the most telling of which are the maximum execution time of just 1ms and the 2MB memory limit:

But that's already enough for lightweight operations, mainly manipulating HTTP headers (there's a sketch of this at the end of the post)...

Another interesting detail from the comparison table is 「JavaScript (ECMAScript 5.1 compliant)」, which suggests this won't be Node.js (the V8 engine) but some other JS engine?
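As an illustration of the header-manipulation use case above, here's a minimal sketch (my own example, not from the announcement) that creates a CloudFront Function via boto3. The function body is plain ES5-style JavaScript, matching the ECMAScript 5.1 restriction; the function name and the header it adds are arbitrary, and you'd still have to publish the function and associate it with a distribution's cache behavior afterwards:

```python
import boto3

# ES 5.1-style function body: add an HSTS header on the viewer response.
# (No arrow functions, let/const, etc.)
function_code = """
function handler(event) {
    var response = event.response;
    response.headers['strict-transport-security'] = {
        value: 'max-age=63072000; includeSubDomains; preload'
    };
    return response;
}
"""

cloudfront = boto3.client("cloudfront")
resp = cloudfront.create_function(
    Name="add-hsts-header",                    # arbitrary example name
    FunctionConfig={
        "Comment": "Add HSTS header on viewer response",
        "Runtime": "cloudfront-js-1.0",
    },
    FunctionCode=function_code.encode("utf-8"),
)
print(resp["FunctionSummary"]["FunctionMetadata"]["FunctionARN"])
```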