An interview about Linode's second Tokyo data center (a.k.a. a PR piece)

A few days ago I mentioned the work going on to expand Linode's Tokyo data center; now Linode has started building buzz with a PR piece: "Behind The Scenes: Details About Upcoming Tokyo 2 DC Launch". Here are the highlights.

The new facility is in Shinagawa; the article spells it "Shibagawa", which is presumably a typo (b and n are only one key apart):

Linode will be opening a new datacenter in Shibagawa ward, Tokyo, Japan, this fall and I was able to interview Linode’s datacenter operations manager, Brett Kaplan, who answered questions I asked regarding the upcoming Tokyo datacenter launch.

The target is to go live in Q4 2016:

Soh: Can you say when this new datacenter is expected to come online?
Brett: We are hoping to launch later in Q4 this year.

They are using an Equinix facility (and here "Shinagawa" is spelled correctly XD):

Soh: So, where did you decide to establish the second Tokyo location?
Brett: We are utilizing an Equinix datacenter in Shinagawa ward Tokyo, Japan.

From the description, this should be the "Tokyo TY2 Network Interconnection" site...

ING Bank's data center in Romania has an incident...

ING Bank's data center in Romania suffered data corruption: "A Loud Sound Just Shut Down a Bank's Data Center for 10 Hours".

The cause, though, was that the gas discharge during a fire-suppression test was so loud that it damaged the hard drives XDDD

ING Bank’s main data center in Bucharest, Romania, was severely damaged over the weekend during a fire extinguishing test. In what is a very rare but known phenomenon, it was the loud sound of inert gas being released that destroyed dozens of hard drives. The site is currently offline and the bank relies solely on its backup data center, located within a couple of miles’ proximity.

The article includes a test video demonstrating what extremely loud noise does to hard drives:

A data center built in a nuclear fallout shelter

Translated literally, "nuclear fallout shelter" would be 核放射塵碉堡 in Chinese, essentially an air-raid shelter that can keep out radioactive fallout; Google Translate renders it as 核輻射避難所, which actually feels quite fitting...

The C14 project is Online.net's effort to build a data center inside a nuclear fallout shelter in Paris: "C14 story - Part 1: Meet Our Nuclear Fallout Shelter".

It sits 26 meters underground; at 3 meters per floor, that is roughly the eighth or ninth basement level:

Starting in October 2016, you will be able to store all your critical C14 data in our fallout shelter, located 26 meters underground in Paris, France.

Online bought the site from the French state in 2012 and rebuilding then began:

In 2011, the French state, owner of the building, decided to move the Ponts et Chaussées' central laboratory in the Parisian suburb and started to dismantle the building.

The Ponts et Chaussées' central laboratory buildings were revamped and divided in multiple bundles to be sold and transformed in multi-unit housing. The main building and the shelter were sold separately via a public invitation to tender. Online landed the deal in September 2012 with the project to build a Datacenter. The project’s codename is DC4.

The near-completion photos look fantastic:

This looks like the start of a series, so there should be plenty more photos to look forward to...

Google Cloud Platform's US West region

Google Cloud opened its US West region back in July: "Introducing Cloud Natural Language API, Speech API open beta and our West Coast region expansion", and the Tokyo region is about to open as well:

And as we announced in March, Tokyo will be coming online later this year and we will announce more than 10 additional regions in 2017.

I only found this while digging around after reading "Why we moved from Amazon Web Services to Google Cloud Platform?". That post made me redo the cost math: leaving bandwidth cost aside, a GCE f1-micro with a 20GB disk gives you more memory than DigitalOcean's entry plan (though DO's 20GB is SSD), and it is still cheaper...

But once you factor in bandwidth, it hurts: Internet egress (i.e. outbound bandwidth) has to go over Google's network. Still, now that there is a US West region, it looks worth starting to consider :o

CloudFlare adds another location in Asia: Bangkok, Thailand

CloudFlare has added another PoP in Asia, and its Southeast Asia coverage keeps getting denser: "Bangkok, Thailand: CloudFlare's 79th Data Center".

Will the next Asian location be Vietnam? Also, the US has always had relatively few locations compared to other regions; I wonder whether they will add more...

DigitalOcean's next data center will be in India

DigitalOcean's next data center will be built in India: "Announcing the Home of our Next Datacenter: Bangalore, India".

There has been a lot of news about investment in the Indian market in recent years. Amazon CloudFront added Indian PoPs back in 2013, two of them at once ("AWS CloudFront 與 Route53 增加印度機房..."), and last year, in 2015, AWS announced plans for a full Indian region: "In the Works – AWS Region in India".

Netflix goes fully cloud

Netflix announced that it has shut down its last non-cloud data center: "Completing the Netflix Cloud Migration".

We are happy to report that in early January, 2016, after seven years of diligent effort, we have finally completed our cloud migration and shut down the last remaining data center bits used by our streaming service!

In the end it simply means they chose to go all-in on the cloud...

CloudFlare brings up four data centers in the Middle East...

CloudFlare announced that it has brought four data centers online in the Middle East: "Now serving the Middle East: 4 new data centers, partnerships".

According to the comments, some users who used to be routed to the Singapore data center now exchange traffic directly in the Middle East, which is noticeably faster.

So now it looks like it is Asia and Africa whose coverage is still too thin...?

Facebook is dead slow...

Facebook is unbearably slow today, and it looks like a problem with the Taiwan PoP. First, a workaround:

Switch your DNS to 8.8.8.8; name resolution then appears to point at the Los Angeles data center, and speeds go back to normal.
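
On a Linux box, a minimal way to apply this (assuming /etc/resolv.conf is edited directly and nothing like resolvconf or NetworkManager overwrites it) looks like:

# /etc/resolv.conf - use Google Public DNS instead of the ISP's resolver
nameserver 8.8.8.8
nameserver 8.8.4.4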

Here are the results I observed. Static pages (tested with /robots.txt) are fine:
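
The output below is from ApacheBench (ab); an invocation roughly like the following would produce it (20 requests at concurrency 1 over HTTPS; the exact command line is my reconstruction, and the same form against / gives the dynamic-page numbers further down):

ab -n 20 -c 1 https://www.facebook.com/robots.txt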

Server Software:        
Server Hostname:        www.facebook.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,256,128

Document Path:          /robots.txt
Document Length:        4159 bytes

Concurrency Level:      1
Time taken for tests:   0.792 seconds
Complete requests:      20
Failed requests:        0
Total transferred:      99000 bytes
HTML transferred:       83180 bytes
Requests per second:    25.24 [#/sec] (mean)
Time per request:       39.612 [ms] (mean)
Time per request:       39.612 [ms] (mean, across all concurrent requests)
Transfer rate:          122.03 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       28   30   0.6     30      31
Processing:     9   10   0.3     10      10
Waiting:        9    9   0.2      9      10
Total:         38   39   0.7     39      41

Percentage of the requests served within a certain time (ms)
  50%     39
  66%     40
  75%     40
  80%     40
  90%     40
  95%     41
  98%     41
  99%     41
 100%     41 (longest request)

But dynamic pages are painfully slow (note the Non-2xx responses; without a session, hitting / most likely just returns a redirect, so these numbers time the redirect response):

Server Software:        
Server Hostname:        www.facebook.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,256,128

Document Path:          /
Document Length:        0 bytes

Concurrency Level:      1
Time taken for tests:   20.294 seconds
Complete requests:      20
Failed requests:        0
Non-2xx responses:      20
Total transferred:      6700 bytes
HTML transferred:       0 bytes
Requests per second:    0.99 [#/sec] (mean)
Time per request:       1014.690 [ms] (mean)
Time per request:       1014.690 [ms] (mean, across all concurrent requests)
Transfer rate:          0.32 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       28   30   0.9     29      32
Processing:   151  985 1144.5    567    4939
Waiting:      151  985 1144.5    567    4938
Total:        180 1014 1145.1    597    4971
WARNING: The median and mean for the initial connection time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%    597
  66%   1226
  75%   1279
  80%   1667
  90%   2267
  95%   4971
  98%   4971
  99%   4971
 100%   4971 (longest request)

A few new discoveries: IPv6 and Facebook's Taiwan PoP...

Found these by accident while testing...

After dialing in to HiNet over PPPoE on Ubuntu 14.04, I now get an IPv6 address (as I recall, I could not get one for quite a while after applying), and several of them at once (I am not sure why; I should go check whether IPv6 has some mechanism that explains this):

ppp0      Link encap:Point-to-Point Protocol  
          inet addr:1.163.x.x  P-t-P:168.95.x.x  Mask:255.255.255.255
          inet6 addr: 2001:b011:3008:282:xxxx:xxxx:xxxx:xxxx/64 Scope:Global
          inet6 addr: 2001:b011:3008:282:xxxx:xxxx:xxxx:xxxx/64 Scope:Global
          inet6 addr: 2001:b011:3008:282:xxxx:xxxx:xxxx:xxxx/64 Scope:Global
          inet6 addr: 2001:b011:3008:282:xxxx:xxxx:xxxx:xxxx/64 Scope:Global
          inet6 addr: fe80::xxxx:xxxx:xxxx:xxxx/10 Scope:Link
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
          RX packets:24632377 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16553423 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3 
          RX bytes:30408127665 (30.4 GB)  TX bytes:2344774062 (2.3 GB)
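
A plausible explanation (my guess, not verified here) is IPv6 privacy extensions (RFC 4941): besides the stable SLAAC address, the kernel keeps generating temporary addresses within the same /64 and rotates them, so several global addresses coexist on the interface. Something along these lines would show which addresses are marked temporary and whether the feature is enabled (ppp0 as above):

ip -6 addr show dev ppp0
sysctl net.ipv6.conf.ppp0.use_tempaddr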

Then, while poking around, I found that Facebook has a presence in Taiwan:

gslin@GSLIN-HOME1404 [~] [00:32/W3] mtr --report www.facebook.com
Start: Thu Mar 19 00:32:25 2015
HOST: GSLIN-HOME1404              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- ipv6.dynamic.hinet.net     0.0%    10    6.2   6.5   6.2   7.3   0.0
  2.|-- 2001:b000:82:5:22:2201:1:  0.0%    10    6.1   9.0   6.1  26.4   6.3
  3.|-- 2001:b000:82:4:3201:3302:  0.0%    10    6.7   6.6   6.3   6.8   0.0
  4.|-- 2001:b000:80:3:80:82:3:2   0.0%    10   11.6  10.7   6.9  16.6   2.8
  5.|-- 2001:b000:80:4:3011:3311:  0.0%    10    6.9   7.3   6.4  10.0   1.1
  6.|-- 2001:b000:80:7:0:3:2934:1  0.0%    10   16.8   8.3   6.9  16.8   3.0
  7.|-- po126.msw01.01.tpe1.tfbnw  0.0%    10    7.9   8.0   7.5   8.9   0.0
  8.|-- edge-star6-shv-01-tpe1.fa  0.0%    10    7.3   7.2   6.8   7.6   0.0

I then went back and tested over IPv4:

gslin@GSLIN-HOME1404 [~] [00:32/W3] mtr --report -4 www.facebook.com
Start: Thu Mar 19 00:33:18 2015
HOST: GSLIN-HOME1404              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- h254.s98.ts.hinet.net      0.0%    10    6.9   6.4   5.8   6.9   0.0
  2.|-- SNUH-3301.hinet.net        0.0%    10    6.3  12.2   6.2  60.9  17.1
  3.|-- SNUH-3201.hinet.net        0.0%    10    6.2   6.6   6.2   6.8   0.0
  4.|-- TPDT-3011.hinet.net        0.0%    10    7.8   8.8   7.6  10.5   0.7
  5.|-- tpdb-3311.hinet.net        0.0%    10    6.4   6.7   6.3   7.8   0.3
  6.|-- 203-75-228-33.HINET-IP.hi  0.0%    10    7.4   7.3   7.0   7.6   0.0
  7.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
  8.|-- edge-star-shv-02-tpe1.fac  0.0%    10    7.3   7.3   6.8   7.7   0.0

Images on Facebook are now served from scontent-tpe.xx.fbcdn.net, so the traffic volume should not be small. Searching Google for scontent-tpe.xx.fbcdn.net suggests it went live around 2015/02/14.
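
A quick way to check where that CDN hostname resolves from your own network is something like (the hostname is the one seen in the image URLs; the dig check itself is just an illustration):

dig +short scontent-tpe.xx.fbcdn.net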

Looking at routing from a few different ISPs, Facebook appears to peer directly with several domestic ISPs; the ones without peering exchange traffic over TPIX.

The academic network (TANet), however, has to detour to HKIX in Hong Kong and back, which is a bit of a loss; I do not know whether Facebook announces a different endpoint to TANet. (Schools that lease their own international transit should go out over the leased circuit instead, which usually hands off to TPIX at TWGate, so they would not take this detour...)