We do eviction based on an algorithm called “least recently used” or LRU. This means that the least-requested content can be evicted from cache first to make space for more popular content when storage space is full.
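A minimal sketch of the LRU idea (my own Python illustration, not Cloudflare's actual implementation): the cache tracks how recently each object was requested and, when storage is full, drops the entry that has gone the longest without a request.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least recently requested entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, ordered by recency of access

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: caller would fetch from the origin
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            # storage full: evict the least recently used entry
            self.entries.popitem(last=False)
```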
The Cache Reserve Plan will mimic the low cost of R2. Storage will be $0.015 per GB per month and operations will be $0.36 per million reads, and $4.50 per million writes.
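As a rough back-of-envelope illustration of how those unit prices add up (the traffic and storage numbers here are made up, only the rates come from the quote):

```python
# Hypothetical month of Cache Reserve usage, priced at the quoted rates:
# $0.015 per GB-month storage, $0.36 per million reads, $4.50 per million writes.
stored_gb = 1_000          # assumed: 1 TB held in the reserve
reads = 50_000_000         # assumed: 50M reads
writes = 5_000_000         # assumed: 5M writes

storage_cost = stored_gb * 0.015
read_cost = reads / 1_000_000 * 0.36
write_cost = writes / 1_000_000 * 4.50

print(f"storage ${storage_cost:.2f} + reads ${read_cost:.2f} + writes ${write_cost:.2f}"
      f" = ${storage_cost + read_cost + write_cost:.2f}/month")
# -> storage $15.00 + reads $18.00 + writes $22.50 = $55.50/month
```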
There is also the part of Cache Reserve pricing that hasn't been announced yet:
(Cache Reserve pricing page will be out soon)
For users who really want to squeeze out every last bit of hit rate, this is an option. It also occurs to me that live-streaming protocols (like HLS) could probably use this to reduce the load on the origin server?
Yes, the wordings are carefully formulated as they have to be signed off by the AWS legal team for obvious reasons. With that said, this update was driven by profiling real applications and addressing the most common operations, so the benefits are real. For example, a simple WordPress "hello world" is now about 2x as fast as before.
Also, this round of performance improvements was achieved through a cache layer:
I'm the PMT for this project in the EFS team. The "flip the switch" part was indeed one of the harder parts to get right. Happy to share some limited details. The performance improvement builds on a distributed consistent cache. You can enable such a cache in multiple steps. First you deploy the software across the entire stack that supports the caching protocol but it's disabled by configuration. Then you turn it on for the multiple components that are involved in the right order. Another thing that was hard to get right was to ensure that there are no performance regressions due to the consistency protocol.
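A hedged sketch of what that kind of "deploy dark, then flip the switch in order" rollout can look like (my own illustration; the component names and config shape are invented, not EFS internals):

```python
# Stage 1: ship code everywhere with the cache protocol built in but disabled by config.
# Stage 2: flip the flag component by component, in dependency order, so readers
# never rely on the protocol before the write/invalidation path supports it.
ROLLOUT_ORDER = ["replication-log", "metadata-store", "cache-nodes", "frontends"]

config = {component: {"consistent_cache_enabled": False} for component in ROLLOUT_ORDER}

def enable_consistent_cache(config, rollout_order):
    for component in rollout_order:
        config[component]["consistent_cache_enabled"] = True
        # In a real system you would watch health and consistency metrics here
        # before moving on, and roll the flag back on any regression.
        print(f"enabled consistent cache on {component}")

enable_consistent_cache(config, ROLLOUT_ORDER)
```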
And there is a cache in each AZ:
The caches are local to each AZ so you get the low latency in each AZ, the other details are different. Unfortunately I can't share additional details at this moment, but we are looking to do a technical update on EFS at some point soon, maybe at a similar venue!
And it looks like the metadata cache is where most of the benefit comes from:
NFS workloads are typically metadata heavy and highly correlated in time, so you can achieve very high hit rates. I can't share any specific numbers unfortunately.
The single-CPU server is hooked up with four 100Gbps interfaces, and the dual-CPU server with eight (SUT here is short for system under test):
These client systems were connected to the CDN servers using 100 GbE links through a switch; 4x100 GbE connections for the single-processor SUT, and 8x100 GbE for the dual-processor SUT. Testing was done using Wrk, a widely recognized open-source HTTP(S) benchmarking tool.
The test configurations are also listed at the end of the whitepaper; everything runs on Ubuntu 20.04, with the single-CPU setup using two Intel 100Gbps NICs and the dual-CPU setup using four Mellanox 100Gbps NICs:
3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Single processor SUT configuration was based on the Supermicro SMC 110P-WTR-TNR single socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by two 100 GbE Intel® Ethernet Network Adapters E810. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84 TB per drive and was provided by 8 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.
3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Dual processor SUT configuration was based on the Supermicro SMC 220U-TNR dual socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by four 100 GbE Mellanox MCX516A-CDAT adapters. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84 TB per drive and was provided by 12 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.
Per-object metadata is stored within a discrete S3 subsystem. This system is on the data path for GET, PUT, and DELETE requests, and is responsible for handling LIST and HEAD requests. At the core of this system is a persistence tier that stores metadata. Our persistence tier uses a caching technology that is designed to be highly resilient. S3 requests should still succeed even if infrastructure supporting the cache becomes impaired. This meant that, on rare occasions, writes might flow through one part of cache infrastructure while reads end up querying another. This was the primary source of S3’s eventual consistency.
The most direct way to fix the eventual consistency problem would be to rip out the cache, but that would hurt performance too much, so the design had to keep the cache; the idea was to use another channel to make sure the data held in the cache is in the correct state:
One early consideration for delivering strong consistency was to bypass our caching infrastructure and send requests directly to the persistence layer. But this wouldn’t meet our bar for no tradeoffs on performance. We needed to keep the cache. To keep values properly synchronized across cores, CPUs implement cache coherence protocols. And that’s what we needed here: a cache coherence protocol for our metadata caches that allowed strong consistency for all requests.
We had introduced new replication logic into our persistence tier that acts as a building block for our at-least-once event notification delivery system and our Replication Time Control feature. This new replication logic allows us to reason about the “order of operations” per-object in S3. This is the core piece of our cache coherency protocol.
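A minimal sketch of how I read that idea (my own illustration, not S3's actual protocol): each object carries a monotonically increasing sequence number derived from the replication log, and a cached metadata entry is only served if its sequence number is still the latest known for that object.

```python
class CoherentMetadataCache:
    """Toy cache-coherence sketch: per-object sequence numbers decide validity."""

    def __init__(self, persistence):
        self.persistence = persistence  # authoritative metadata store: key -> (seq, metadata)
        self.cache = {}                 # key -> (seq, metadata)
        self.latest_seq = {}            # key -> highest sequence seen via the replication log

    def on_write(self, key, metadata, seq):
        # Writes land in the persistence tier; the replication log announces the
        # new per-object sequence number, which implicitly invalidates stale cache entries.
        self.persistence[key] = (seq, metadata)
        self.latest_seq[key] = seq

    def get(self, key):
        cached = self.cache.get(key)
        if cached and cached[0] == self.latest_seq.get(key):
            return cached[1]  # cache hit that is provably up to date
        seq, metadata = self.persistence[key]  # stale or missing: read through
        self.cache[key] = (seq, metadata)
        self.latest_seq[key] = max(self.latest_seq.get(key, 0), seq)
        return metadata
```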
These verification techniques were a lot of work. They were more work, in fact, than the actual implementation itself. But we put this rigor into the design and implementation of S3’s strong consistency because that is what our customers need.
The attack works by first sending a DNS query to the DNS resolver and then immediately firing a forged DNS response packet at that resolver; with a bit of luck, the forged answer gets cached.
As I recall, 168.95.1.1 and 168.95.192.1 were not hit as directly back then (compared with other DNS resolvers), because these resolvers had a server cluster behind them that spread the traffic out, so an attacker couldn't reliably guess which IP address the querying DNS server would use; it was somewhat similar to the mitigation mentioned below (just not as effective).
As I recall, the later mitigation was to randomize the source port (so that the DNS resolver doesn't send its queries out from port 53), which dramatically reduced the chance of a successful attack (down to roughly 1 in 2^32 per forged response, since the attacker now has to guess the source port in addition to the transaction ID).
Later, DNS added nonce mechanisms that pushed the attack success probability down by another large factor, and in the end it was the gradual spread of DNSSEC support that finally put the problem to rest.
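For reference, a quick worked version of the source-port arithmetic (rough figures in my own Python sketch; the real entropy is slightly lower because the ephemeral port range is smaller than 2^16):

```python
# Probability that a single forged response is accepted by the resolver.
txid_space = 2 ** 16   # 16-bit DNS transaction ID
port_space = 2 ** 16   # ~16 bits added by source-port randomization

p_txid_only = 1 / txid_space                     # before: guess the transaction ID
p_txid_and_port = 1 / (txid_space * port_space)  # after: guess the ID and the source port

print(f"TXID only:       {p_txid_only:.2e}  (1 in {txid_space:,})")
print(f"TXID + src port: {p_txid_and_port:.2e}  (1 in {txid_space * port_space:,})")
```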