
CloudFlare's new product: taking HTTPS Everywhere's ruleset to the CDN...

The product works exactly the way the title suggests; it is not hard to build, just that nobody had bothered to do it before: "How we brought HTTPS Everywhere to the cloud (part 1)".

The naive approach is to rewrite the links outright, or to use a header and let the browser do the upgrade itself:

A naive way to do this would be to just rewrite http:// links to https:// or let browsers do that with Upgrade-Insecure-Requests directive.

That only works, though, when both of the following hold:
  • Each single HTTP sub-resource is also available via HTTPS.
  • It's available at the exact same domain and path after protocol upgrade (more often than you might think that's not the case).

HTTPS Everywhere, on the other hand, has human-verified which sites can be upgraded this way. CloudFlare takes that list and rewrites the HTTP links inside the page, upgrading HTTP resources to HTTPS wherever possible. A pretty decent approach...
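
A minimal sketch of what this kind of ruleset-driven upgrading looks like, loosely modeled on the HTTPS Everywhere rule shape (host targets plus from/to regex rewrites); the rule below is a made-up example, not an entry from the real list:

    import re

    # Hypothetical rule in the spirit of an HTTPS Everywhere ruleset entry:
    # target host patterns plus a from/to regex rewrite.
    RULES = [
        {
            "targets": [r"(^|\.)example\.com$"],
            "from": re.compile(r"^http://(www\.)?example\.com/"),
            "to": r"https://\1example.com/",
        },
    ]

    def upgrade(url):
        """Rewrite an HTTP URL to HTTPS only when a verified rule covers its host."""
        m = re.match(r"^http://([^/]+)/", url)
        if not m:
            return url
        host = m.group(1)
        for rule in RULES:
            if any(re.search(t, host) for t in rule["targets"]):
                return rule["from"].sub(rule["to"], url)
        return url  # no verified rule: leave the HTTP link alone rather than guess

    print(upgrade("http://www.example.com/logo.png"))  # https://www.example.com/logo.png
    print(upgrade("http://unknown.example.org/a.js"))  # unchanged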

Maybe a proxy cache for HTTP images and HTTP assets comes next?

CloudFlare's complaints about HiNet's transit costs (and other ISPs'...)

CloudFlare wrote a whole post fishing for sympathy, breaking down the extremely high transit costs it pays to six ISPs: "Bandwidth Costs Around the World".

As a benchmark, let's assume the cost of transit in Europe and North America is 10 units (per Mbps).

And here is what it says about Asia, HiNet in particular:

Two Asian locations stand out as being especially expensive: Seoul and Taipei. In these markets, with powerful incumbents (Korea Telecom and HiNet), transit costs 15x as much as in Europe or North America, or 150 units.

Roughly 15x the cost. The post also notes that these six ISPs account for nearly 50% of their bandwidth costs (on less than 6% of traffic):

Today, however, there are six expensive networks (HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica, Telstra) that are more than an order of magnitude more expensive than other bandwidth providers around the globe and refuse to discuss local peering relationships. To give you a sense, these six networks represent less than 6% of the traffic but nearly 50% of our bandwidth costs.
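
As a quick sanity check, the 6%-of-traffic versus 50%-of-cost split quoted above implies roughly the same multiplier (a back-of-the-envelope calculation, not CloudFlare's own numbers):

    # If ~6% of traffic generates ~50% of the bandwidth cost, the per-unit
    # price of the six expensive networks relative to everyone else is:
    expensive = 0.50 / 0.06  # cost share per traffic share, six networks
    others = 0.50 / 0.94     # cost share per traffic share, remaining 94%
    print(expensive / others)  # ~15.7, in line with the quoted 15x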

Why does this feel so familiar XDDD

Amazon CloudFront's Canadian locations (plus a side note on CloudFlare...)

Amazon CloudFront has added Canadian edge locations: "Amazon CloudFront Expands to Canada", two at once:

I am happy to announce that we are adding CloudFront edge locations in Toronto and Montreal in order to better serve our users in the region, bringing the global count up to 59 (full list).

They finally remembered that Canada exists XDDD

Separately, I noticed yesterday that CloudFlare and Taiwan Fixed Network (which also serves Taiwan Mobile users) now peer directly; the link looks like it came up at the end of June, I just hadn't noticed.

With that, all of Taiwan's major ISPs have direct connections to CloudFlare...

Using CloudFlare's reCAPTCHA to de-anonymize Tor users

Spotted on Cryptome, a technique that could plausibly be exploited: "Cloudflare reCAPTCHA De-anonymizes Tor Users".

When Tor users hit a site behind CloudFlare they frequently get a reCAPTCHA challenge, and the traffic pattern generated while solving it is large and very distinctive. An intelligence agency that can simultaneously watch CloudFlare's upstream (say, the problem described in "Airtel is sniffing and censoring CloudFlare's traffic in India and CloudFlare doesn't even know it.") or the upstream of Tor exit nodes, while also monitoring the user's likely ISPs, can correlate the two sides and pin down the user:

Each click on one of the images in the puzzle generates a total of about 50 packets between Tor user's computer and the Cloudflare's server (about half are requests and half are real-time responses from the server.) All this happens in less than a second, so eventual jitter introduced in onion mixing is immaterial. The packet group has predictable sizes and patterns, so all the adversary has to do is note the easily detectable signature of the "image click" event, and correlate it with the same on the Cloudflare side. Again, no decryption required.

About 50 packets in under a second, with a very recognizable pattern...
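
A minimal sketch of the timing-correlation idea, with made-up timestamps; the `correlate` helper and its tolerance value are illustrative, not taken from the Cryptome write-up:

    from bisect import bisect_left

    def correlate(user_side, cloudflare_side, tolerance=0.05):
        """Fraction of user-side burst timestamps that line up, within
        `tolerance` seconds, with bursts seen on the Cloudflare side."""
        cloudflare_side = sorted(cloudflare_side)
        hits = 0
        for t in user_side:
            i = bisect_left(cloudflare_side, t)
            for j in (i - 1, i):
                if 0 <= j < len(cloudflare_side) and abs(cloudflare_side[j] - t) <= tolerance:
                    hits += 1
                    break
        return hits / len(user_side) if user_side else 0.0

    # Toy data: timestamps (seconds) of "image click" packet bursts observed
    # at the user's ISP and near Cloudflare. A high score links the two
    # vantage points without decrypting anything.
    print(correlate([1.00, 2.50, 4.10], [1.02, 2.49, 4.11, 9.00]))  # 1.0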

CloudFlare's SPDY + HTTP/2 patch now runs on nginx 1.11...

SPDY still matters on mobile: Android only gained HTTP/2 support in 5.0, but has supported SPDY since 3.0, so more than half of Android users can speak SPDY but not HTTP/2.

While digging into this I found that CloudFlare's SPDY + HTTP/2 patch originally only applied against nginx 1.9.7 (see my earlier post "CloudFlare releases an nginx patch supporting both HTTP/2 and SPDY"); someone later updated it for 1.9.15, and in the comments there is a report of it working on 1.11.0: "Update NGINX SPDY patch for 1.9.15":

Works fine against nginx 1.11.0 and OpenSSL 1.0.2h (with ChaCha20 patch). I am building nginx as an Ubuntu 16.04 package.

The latest 1.11.1 contains only a security fix, so it should run there too... let's see whether anyone packages a PPA.

CloudFlare's Moscow data center goes live

As CloudFlare announced earlier, the Moscow data center is now live: "Moscow, Russia: CloudFlare's 83rd data center".

Previously, delivery of nearly four million Internet applications on CloudFlare to over 100 million Internet users in Russia occurred mostly from our Stockholm and Frankfurt data centers. By now localizing that delivery, we are helping shave latency down by over 20ms for a majority of Russian users.

With this, their European footprint looks complete... next up should be Africa, plus more North American locations.

CloudFlare finally remembers to add US locations...

CloudFlare kept adding locations overseas, to the point where average latency in Southeast Asia dropped below that in the United States... which is a little absurd XD They have finally remembered to work on US latency: "Hello, Colorado! CloudFlare's 82nd Data Center is Live in Denver".

Denver was exactly the weakest link among their US locations: the nearest CloudFlare data centers used to be Phoenix and Dallas, both roughly 600 km away, and since network paths never run in a straight line the latency added up, so the improvement from this new location is very noticeable:

Denver joins CloudFlare's existing United States data centers in Ashburn, Chicago, Dallas, Los Angeles, Miami, Minneapolis, Phoenix, San Jose, and Seattle. In the continued pursuit of faster performance, we have data centers at another ten North American cities in the works.

Our week of expansion isn't done. Tomorrow, we head to one of the world's largest cities.

CloudFlare releases an nginx patch supporting both HTTP/2 and SPDY

CloudFlare has released an nginx patch that supports HTTP/2 and SPDY at the same time: "Open sourcing our NGINX HTTP/2 + SPDY code".

The patch targets a rather old version, though:

We've extracted our changes and they are available as a patch here. This patch should build cleanly against NGINX 1.9.7.

As the comments under the post point out, nginx 1.9.7 is quite old (released 2015/11/17); there have been a number of security updates since then, as well as HTTP/2 bugfixes. We will probably have to wait for an official refresh of the patch against a newer version before it is really usable.

CloudFlare's Origin CA: protecting the hop from CloudFlare to your origin

The Origin CA CloudFlare just launched protects the leg from CloudFlare to the origin server: "Introducing CloudFlare Origin CA" (the right half of the diagram in their post).

CloudFlare dresses the feature up as something magical, but in practice it is just CloudFlare running its own CA, nothing more.

Of course, since it does not have to deal with the constraints of a public CA, many of the verification steps a normal TLS connection requires can be skipped, which improves performance: intermediate certificates, OCSP, and SCTs can all go:

With Origin CA certificates, we’ve stripped everything that’s extraneous to communication between our servers and yours to produce the smallest possible certificate and handshake. For example, we have no need to bundle intermediate certificates to assist browsers in building paths to trusted roots; no need to include signed certificate timestamps (SCTs) for purposes of certificate transparency and EV treatment; no need to include links to Certification Practice Statements or other URLs; and no need to listen to Online Certificate Status Protocol (OCSP) responses from third-parties.

Eliminating these unnecessary components typically found in certificates from public CAs has allowed us to reduce the size of the handshake by up to 70%, from over 4500 bytes in some cases to under 1300.
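
Conceptually, the trust model is just a private root: CloudFlare's edge only needs to trust the Origin CA root when it connects to your origin. A minimal sketch of that style of validation in Python; the hostname and the PEM path are placeholders, not real CloudFlare endpoints:

    import socket
    import ssl

    # Trust only the private Origin CA root instead of the system CA bundle;
    # "origin_ca_root.pem" and the hostname below are hypothetical.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile="origin_ca_root.pem")

    with socket.create_connection(("origin.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="origin.example.com") as tls:
            # The handshake succeeds only if the origin presents a certificate
            # chaining to that private root.
            print(tls.version(), len(tls.getpeercert(binary_form=True)), "bytes of leaf cert")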