
Cloudflare Launches Spectrum: A Proxy That Can Forward All 65,535 TCP Ports...

Cloudflare has launched Spectrum; the 65,533 in the article title presumably refers to the ports other than 80 & 443: "Introducing Spectrum: Extending Cloudflare To 65,533 More Ports".

Unlike HTTP and WebSocket proxies, a TCP proxy can't rely on the Host header to tell customers apart, so each customer needs a dedicated IP address (i.e. one IP address can only serve one customer). And because IPv4 addresses are scarce, the feature is only open to Enterprise customers for now:

Today we are introducing Spectrum, which brings Cloudflare’s security and acceleration to the whole spectrum of TCP ports and protocols for our Enterprise customers.

Although it's limited to Enterprise customers for now, Cloudflare is still looking for other ideas; the options floated so far include opening it up to everyone over IPv6 addresses, or making it a separately billed item:

Why just Enterprise? While HTTP can use the Host header to identify services, TCP relies on each service having a unique IP address in order to identify it. Since IPv4 addresses are endangered, it’s quite expensive for us to delegate an IP per application and we needed to limit use. We’re actively thinking about ways to bring Spectrum to everyone. One idea is to offer IPv6-only Spectrum to non-Enterprise customers. Another idea is let anyone use Spectrum but pay for the IPv4 address. We’re not sure yet, but if you prefer one to the other, feel free to comment and let us know.
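
As a rough illustration of why the Host header matters here, this is a minimal sketch (not Cloudflare's implementation; all names, tables and addresses below are made up) of the difference between routing HTTP by Host header and routing a raw TCP connection:

```python
# An HTTP proxy can share one IP across many customers because the request
# carries a Host header; a bare TCP connection only tells you which local
# IP/port it arrived on, so each customer needs its own address.

import socket

# Hypothetical routing tables for illustration.
HTTP_BACKENDS = {                       # one shared IP, many customers
    "example.com": ("192.0.2.10", 8080),
    "another.example": ("192.0.2.20", 8080),
}
TCP_BACKENDS = {                        # one dedicated IP per customer
    "198.51.100.1": ("192.0.2.30", 25),
    "198.51.100.2": ("192.0.2.40", 25),
}

def pick_http_backend(raw_request: bytes):
    """Route an HTTP request by its Host header."""
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return HTTP_BACKENDS.get(host)
    return None

def pick_tcp_backend(conn: socket.socket):
    """Route a raw TCP connection: the local address it arrived on is all we have."""
    local_ip, _local_port = conn.getsockname()
    return TCP_BACKENDS.get(local_ip)
```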

The closest existing products are probably clean pipe services, but a typical clean pipe scrubs traffic by rerouting it at the routing level rather than with a design like Cloudflare's... it will be interesting to see how this develops.

HTTPS Everywhere Changes Its Ruleset Update Mechanism to Periodic Updates...

HTTPS Everywhere is an extension I'm quite fond of. It ships with rulesets, and any HTTP request to a site the rulesets list as supporting HTTPS gets rewritten to HTTPS, which lowers the risk of interception. That covers cases like a site that has HSTS but whose first connection goes over HTTP, or a site that supports HTTPS but hasn't set up HSTS and you accidentally type the HTTP version into the address bar.
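
As a rough idea of what ruleset-driven rewriting looks like, here's a simplified sketch; the real HTTPS Everywhere rulesets are XML files with target, rule and exclusion entries, so the rule format below is made up purely for illustration:

```python
# Rewrite HTTP URLs to HTTPS when a (made-up) rule matches.

import re

# Hypothetical rules: (from-pattern, to-template).
RULES = [
    (re.compile(r"^http://(www\.)?example\.com/"), r"https://\1example.com/"),
]

def rewrite(url: str) -> str:
    """Return the HTTPS version of a URL if a rule matches, else the URL unchanged."""
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url

print(rewrite("http://www.example.com/login"))  # -> https://www.example.com/login
print(rewrite("http://unlisted.example/"))      # unchanged: not in the ruleset
```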

Previously the rulesets were bundled with the extension and only updated when the extension itself was updated. The downside was slower updates; the upside was that no server side was needed and privacy was better. The EFF has now decided to roll out online updates to speed up ruleset distribution: "HTTPS Everywhere Introduces New Feature: Continual Ruleset Updates".

We've modified the extension to periodically check in with EFF to see if a new list is available.

The bandwidth side is sponsored by Fastly:

If you haven't already, please install and contribute to HTTPS Everywhere, and consider donating to EFF to support our work!

If you have concerns about this, you can still turn off the auto updater to avoid leaking information to the EFF or Fastly.

Cloudflare Launches Argo Tunnel

Cloudflare has launched Argo Tunnel, which opens a path between your internal network and Cloudflare: "Argo Tunnel: A Private Link to the Public Internet".

Cloudflare launched Warp last year (see "Cloudflare 推出的 Warp 讓你不用在本地端開對外的 Port 80/443"); this is really just a rename:

During the beta period, Argo Tunnel went under a different name: Warp. While we liked Warp as a name, as soon as we realized that it made sense to bundle Warp with Argo, we wanted it to be under the Argo product name. Plus, a tunnel is what the product is so it's more descriptive.

Doesn't look like there's anything new... purely a rename :o

Cloudflare Introduces a Compression Scheme That Works Under HTTPS

Under TLS (HTTPS) you basically can't enable compression, mainly to keep secret tokens from being extracted through the predictability of the compression dictionary, as in CRIME, BREACH, TIME and HEIST (it never ends...). Turning compression off across the board has a big impact on performance.
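
A toy sketch of the underlying oracle (not a reimplementation of any of the attacks above): when attacker-controlled input gets compressed next to a secret, a guess that matches the secret compresses slightly better, and TLS doesn't hide the resulting length. The "secret" here is made up:

```python
import zlib

SECRET = "csrf_token=9f8a2c"  # hypothetical secret embedded in every response

def observed_length(attacker_guess: str) -> int:
    """Length an eavesdropper sees for one compressed (then encrypted) response."""
    body = f"{SECRET}&search={attacker_guess}"
    return len(zlib.compress(body.encode()))

# A guess sharing a longer prefix with the secret typically compresses a byte
# or so shorter than a wrong guess, which is enough to recover it byte by byte.
print(observed_length("csrf_token=9"))
print(observed_length("csrf_token=X"))
```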

Cloudflare went looking for a way to keep compression without leaking secret tokens, which led to this post: "A Solution to Compression Oracles on the Web".

The key part is the beginning of the Our Solution section:

We decided to use selective compression, compressing only non-secret parts of a page, in order to stop the extraction of secret information from a page.

They use regexes to decide what counts as a secret token, exempt that data from compression, and keep compressing everything else. Transfer sizes still drop significantly, but secret tokens aren't leaked. Since the idea is rather unusual and hasn't been battle-tested, they set up a challenge site for everyone to attack:

We have set up the challenge website compression.website with protection, and a clone of the site compression.website/unsafe without it. The page is a simple form with a per-client CSRF designed to emulate common CSRF protection. Using the example attack presented with the library we have shown that we are able to extract the CSRF from the size of request responses in the unprotected variant but we have not been able to extract it on the protected site. We welcome attempts to extract the CSRF without access to the unencrypted response.
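
A minimal sketch of the selective-compression idea, assuming a hypothetical regex for the secret and a made-up chunk framing; Cloudflare's actual implementation works inside its gzip/brotli encoder rather than splitting the body like this:

```python
import re
import zlib

SECRET_RE = re.compile(rb'name="csrf_token" value="[^"]+"')  # hypothetical pattern

def selective_compress(body: bytes):
    """Split the body into chunks, compressing only the non-secret ones."""
    chunks, pos = [], 0
    for match in SECRET_RE.finditer(body):
        if match.start() > pos:
            chunks.append(("deflate", zlib.compress(body[pos:match.start()])))
        chunks.append(("raw", match.group(0)))  # secrets are never compressed
        pos = match.end()
    if pos < len(body):
        chunks.append(("deflate", zlib.compress(body[pos:])))
    return chunks

page = b'<form><input name="csrf_token" value="9f8a2c"><input name="q"></form>'
for kind, data in selective_compress(page):
    print(kind, len(data))
```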

If this direction works out, someone will probably develop standards so that the compression algorithm doesn't have to guess which parts are secret tokens, which would better guard against leaks caused by tokens the regexes miss...

Cloudflare Launches Its 1.1.1.1 DNS Resolver Service

Cloudflare has launched a DNS resolver service on 1.1.1.1: "Announcing 1.1.1.1: the fastest, privacy-first consumer DNS service". The selling points are privacy and performance.

Because of how special this IP is, it receives a lot of odd traffic... Cloudflare worked out a deal with APNIC to get the right to use the address (and announces it via anycast):

APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.

We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.

Cloudflare put together a performance comparison (against Google Public DNS and OpenDNS), and the average is noticeably faster:

In Taiwan, HiNet's non-static plans (i.e. users connecting over PPPoE) see odd latency to 8.8.8.8:

Compare the response time from the same machine to 168.95.1.1:

However, if you're on a HiNet static plan (the 2-IP or 6-IP plans that skip PPPoE and configure the IP address directly over a bridge-mode connection), the latency to both is about the same; it's unclear whether Google's or HiNet's architecture is the cause.
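
If you want to eyeball this from your own connection, here's a quick-and-dirty sketch that approximates resolver latency with the TCP handshake time to port 53; a proper comparison would send real DNS queries (e.g. with dig) and average many more samples:

```python
import socket
import time

RESOLVERS = ["1.1.1.1", "8.8.8.8", "168.95.1.1"]

def connect_time_ms(ip: str, port: int = 53, timeout: float = 2.0) -> float:
    """Time a TCP handshake to the resolver as a rough RTT estimate."""
    start = time.monotonic()
    with socket.create_connection((ip, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

for ip in RESOLVERS:
    samples = [connect_time_ms(ip) for _ in range(5)]
    print(f"{ip:>12}  min {min(samples):.1f} ms  avg {sum(samples)/len(samples):.1f} ms")
```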

Another odd point is the https://1.1.1.1/ mentioned at the end of the article; in theory IP-based SSL certificates shouldn't be issued? Wondering what the CEO boss is misunderstanding here... XD

Visit https://1.1.1.1/ from any device to get started with the Internet's fastest, privacy-first DNS service.

Update: after digging around, it turns out they can be issued; it's just that most CAs don't offer them...

Facebook Fined in South Korea for Being Too Slow???

Saw the news item "South Korea fines Facebook $369K for slowing user internet connections", which mentions Facebook's rerouting behaviour:

The Korea Communications Commission (KCC) began investigating Facebook last May and found that the company had illegally limited user access, as reported by ABC News. Local South Korean laws prohibit internet services from rerouting users’ connections to networks in Hong Kong and US instead of local ISPs without notifying those users. In a few cases, such rerouting slowed down users’ connections by as much as 4.5 times.

Redirecting users to servers in Hong Kong or the US without telling them sounds like something a GeoDNS setup, plus Facebook's CDN architecture, would do? But in the original report there's another accusation:

The KCC probed claims that Facebook intentionally slowed access while it negotiated network usage fees with internet service providers.

The South Korean authorities also refused to accept that the notice in the terms of use was valid:

Facebook said it did not violate the law in part because its terms of use say it cannot guarantee its services will operate without delays or interference. KCC officials rejected that argument, saying the terms were unfair. It recommended the company amend its terms of use.

So it looks like this is headed to court?

Progress on Cloudflare Using ARM Servers...

Saw on Twitter that Matthew Prince (Cloudflare's founder and current CEO) mentioned the current progress and posted a picture of the power-draw gap between the two (235 W vs. 150 W):

That's an 85 W difference, which over five years works out to 3,723 kWh; factor in PUE and the cost of rack space, and long term there's a decent chance of replacing the existing x86 systems. On the flip side, short term there are migration and testing costs, a (possibly) higher failure rate (being the guinea pig, after all XD), and the price difference of the machines themselves; all things one would want to know...
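
A quick back-of-the-envelope check of that figure:

```python
# An 85 W saving running continuously for five years, ignoring leap days and PUE.

watts_saved = 235 - 150            # 85 W per server
hours = 24 * 365 * 5               # five years of continuous operation
print(watts_saved * hours / 1000)  # 3723.0 kWh
```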

Matthew Prince answered some questions in the replies to the tweet, and it sounds like more details will be written up later, but it feels like the tuning is mostly done and they've decided to switch? This continues what was covered last November in "Cloudflare 測試 ARM 新的伺服器": back then they were testing ARM engineering boards that already traded blows with Xeon (winning some, losing some), and things should have improved further since...

The retweet count shows plenty of anticipation; Linux on ARM is already hot thanks to mobile devices, and what's mainly missing now is stable server hardware.

Cloudflare Workers Opens to Everyone

Cloudflare has announced that Cloudflare Workers is open to everyone: "Everyone can now run JavaScript on Cloudflare with Workers". For earlier coverage, see "Cloudflare Worker 進入 Open Beta 讓大家玩了..." and "Cloudflare 也能在各端點跑 JavaScript 了".

They even made a chart for the pricing: USD$0.5 per million requests, with a USD$5/month minimum (i.e. ten million requests):
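
Put differently, a small sketch of the pricing as described above (check Cloudflare's pricing page for the actual billing rules):

```python
# USD$0.5 per million requests, with a USD$5/month minimum.

def monthly_cost_usd(requests: int) -> float:
    return max(5.0, requests / 1_000_000 * 0.5)

print(monthly_cost_usd(3_000_000))   # 5.0  (below the ten-million minimum)
print(monthly_cost_usd(50_000_000))  # 25.0
```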

It amounts to having more options; putting it in front to handle simple things should work out fine...

The Attack GitHub Took on 2/28...

GitHub was hit by a DDoS attack on 2/28 and put out an incident report fairly quickly: "February 28th DDoS Incident Report".

Unlike GitHub's other posts, though, this one reads more like a PR piece; in short, they paid for Akamai Prolexic's scrubbing service to deal with it... Akamai's own PR piece can be found at "Memcached-fueled 1.3 Tbps attacks - The Akamai Blog".

They spotted the problem at 17:21 UTC, determined it exceeded 100 Gbps, and at 17:26 decided to let Akamai Prolexic take over filtering:

At 17:21 UTC our network monitoring system detected an anomaly in the ratio of ingress to egress traffic and notified the on-call engineer and others in our chat system. This graph shows inbound versus outbound throughput over transit links:

Given the increase in inbound transit bandwidth to over 100Gbps in one of our facilities, the decision was made to move traffic to Akamai, who could help provide additional edge network capacity. At 17:26 UTC the command was initiated via our ChatOps tooling to withdraw BGP announcements over transit providers and announce AS36459 exclusively over our links to Akamai. Routes reconverged in the next few minutes and access control lists mitigated the attack at their border. Monitoring of transit bandwidth levels and load balancer response codes indicated a full recovery at 17:30 UTC. At 17:34 UTC routes to internet exchanges were withdrawn as a follow-up to shift an additional 40Gbps away from our edge.

And that's it; completely a PR piece XDDD
