Google Cloud Platform's Taiwan data center can now launch Standard Network machines

Google Cloud Platform originally only offered the Premium Network tier, which carries traffic over Google's own backbone to the point closest to the destination and only then hands it off through local exchanges. That guarantees stable bandwidth, but it naturally costs more...

The Standard Network tier added later hands traffic off to local exchanges right outside the data center, so it costs less (see "Network Service Tiers - Custom Cloud Network"), but the Taiwan data center never offered Standard Network (it seemed to require a separate request?), so at the end of every month I would test whether it had opened up... and I just found that it can now be enabled, though I'm not sure whether it's fully open or being rolled out in batches.
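
If you want to try it, the tier can be picked when creating the instance. A minimal sketch with the gcloud CLI, assuming the --network-tier flag and the asia-east1 (Taiwan) zone names; double-check against the current docs:

# create a VM in the Taiwan region with a Standard tier external IP
gcloud compute instances create test-standard \
    --zone=asia-east1-b \
    --network-tier=STANDARD

# or change the project-wide default so new resources use Standard
gcloud compute project-info update --default-network-tier=STANDARD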

Tested it a bit and the network turned out to be rather... bad? Is it still being tuned...

For example, latency to 1.1.1.1 is quite high (Google's own 8.8.8.8 naturally doesn't have this problem):

PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=49 time=48.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=49 time=48.5 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=49 time=48.5 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 48.554/48.691/48.964/0.193 ms

168.95.1.1 and 139.175.1.1 are also both poor (61.31.1.1 doesn't answer ping):

PING 168.95.1.1 (168.95.1.1) 56(84) bytes of data.
64 bytes from 168.95.1.1: icmp_seq=1 ttl=239 time=21.2 ms
64 bytes from 168.95.1.1: icmp_seq=2 ttl=239 time=21.3 ms
64 bytes from 168.95.1.1: icmp_seq=3 ttl=239 time=21.4 ms

--- 168.95.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 21.276/21.367/21.454/0.072 ms
PING 139.175.1.1 (139.175.1.1) 56(84) bytes of data.
64 bytes from 139.175.1.1: icmp_seq=1 ttl=53 time=63.4 ms
64 bytes from 139.175.1.1: icmp_seq=2 ttl=53 time=62.9 ms
64 bytes from 139.175.1.1: icmp_seq=3 ttl=53 time=62.9 ms

--- 139.175.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 62.967/63.139/63.455/0.303 ms

The academic network, on the other hand, is pretty decent:

PING 140.112.2.2 (140.112.2.2) 56(84) bytes of data.
64 bytes from 140.112.2.2: icmp_seq=1 ttl=51 time=5.13 ms
64 bytes from 140.112.2.2: icmp_seq=2 ttl=51 time=4.40 ms
64 bytes from 140.112.2.2: icmp_seq=3 ttl=51 time=4.52 ms

--- 140.112.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.405/4.690/5.138/0.325 ms
PING 140.113.250.135 (140.113.250.135) 56(84) bytes of data.
64 bytes from 140.113.250.135: icmp_seq=1 ttl=55 time=5.87 ms
64 bytes from 140.113.250.135: icmp_seq=2 ttl=55 time=5.97 ms
64 bytes from 140.113.250.135: icmp_seq=3 ttl=55 time=6.11 ms

--- 140.113.250.135 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.872/5.987/6.119/0.135 ms
PING 140.117.11.1 (140.117.11.1) 56(84) bytes of data.
64 bytes from 140.117.11.1: icmp_seq=1 ttl=242 time=9.52 ms
64 bytes from 140.117.11.1: icmp_seq=2 ttl=242 time=9.17 ms
64 bytes from 140.117.11.1: icmp_seq=3 ttl=242 time=9.20 ms

--- 140.117.11.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.172/9.298/9.521/0.176 ms

If you need it, you can go test it now...

Details of interconnecting with pfSense on AWS

This is about connecting traffic between different accounts on AWS (i.e. site-to-site VPN), using pfSense for the interconnect instead of AWS's own interconnect services.

There were a few reasons for rolling our own:

  • AWS Transit Gateway pricing: every attachment is billed separately, and the traffic it handles is charged on top of that.
  • The application's traffic is small, so even a t2.nano gives roughly 100Mbps of capacity, which is enough.
  • The application was also written with disconnection handling in mind, and the client side's network isn't that stable to begin with; no matter how high AWS Transit Gateway's SLA is, I still have to handle what happens after a disconnect, so there's no need to be that uptight about it...

Things to watch out for when setting it up:

  • Turn off the EC2 source/destination IP check; this is table stakes (see the sketch after this list).
  • Go through the routing inside the VPC once.
  • The Security Group on the pfSense EC2 host pretty much has to be wide open, because pfSense sends packets whose source isn't its own IP address and receives packets whose destination isn't its own IP address (via the routing mentioned above). These still go through the Security Group checks, and the number of rules a Security Group can hold is limited, so in practice you'll open it all up...
  • After configuring IPsec on pfSense, the firewall rules on pfSense itself still have to be opened manually, since they are closed by default.
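
The first two items on the list can be done with the AWS CLI. A rough sketch (the instance ID, route table ID and remote CIDR below are placeholders):

# turn off the source/destination check on the pfSense instance
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check

# route the remote site's CIDR through the pfSense instance
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.20.0.0/16 \
    --instance-id i-0123456789abcdef0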

This is really just the approach people used before AWS launched Transit Gateway; the old method is still very handy...

AWS offers the VPC Traffic Mirroring feature

Back in the data center days you could use port mirroring on a switch to inspect traffic and troubleshoot; AWS now offers a similar capability, VPC Traffic Mirroring: "New – VPC Traffic Mirroring – Capture & Inspect Network Traffic".

This means all the switch-based techniques from traditional data centers can be redeveloped on AWS, so it's not too surprising that the first wave already had a bunch of partners offering services and companies sharing their experience.

AWS's VPC Traffic Mirroring is also more flexible than a switch's port mirror: the mirror source is a network resource inside a VPC, with ENIs as the supported source type:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Besides an ENI, the mirrored traffic can also be sent to an NLB:

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

And of course, packets can be filtered:

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

And it has a notion of sessions (from this description, I assume it means stateful connections?):

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.
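
Putting the three pieces together with the AWS CLI looks roughly like this (all the IDs are placeholders, so treat it as a sketch rather than a copy-paste recipe):

# the target: an ENI (or an NLB) that will receive the mirrored packets
aws ec2 create-traffic-mirror-target \
    --network-interface-id eni-0aaaabbbbccccdddd

# a filter with one rule that accepts all inbound TCP traffic
aws ec2 create-traffic-mirror-filter --description "mirror inbound tcp"
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --traffic-direction ingress \
    --rule-number 100 \
    --rule-action accept \
    --protocol 6 \
    --source-cidr-block 0.0.0.0/0 \
    --destination-cidr-block 0.0.0.0/0

# the session ties a source ENI to the target through the filter
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0111122223333aaaa \
    --traffic-mirror-target-id tmt-0123456789abcdef0 \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --session-number 1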

And this launch makes it available in almost every region right away, and the pricing doesn't look too expensive either:

VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

Tumblr's effect on Yahoo!'s traffic

Also spotted on Twitter: the impact on traffic after Yahoo! (Oath) decided to go after NSFW images on Tumblr:

The chart appears to come from the data at "Traffic Analysis for port 36:"(36) Yahoo (AS 10310)""; the drop is quite visible... but then what's that dip in September? :o

Sandvine's analysis of global Internet traffic, and how did those two make the list...

Saw "Netflix Dominates Internet Traffic Worldwide, BitTorrent Ranks Fifth", which covers Sandvine's analysis of global Internet traffic; the key chart is this one:

Most of the applications aren't surprising (the differences are just regional usage habits), but what's going on with tenth place in the AMERICAS (XBOX LIVE UPDATE) and seventh place in EMEA (PLAYSTATION DOWNLOAD) XDDD

A service that evaluates "locations" by travel time...

Oalley is a service that uses travel time to evaluate locations. The simplest use is to give it one location and a time budget, and it draws the reachable area:

Or give it multiple points plus a time budget to evaluate a location:

It's backed by OpenStreetMap data, though; the Chinese place names I tried didn't seem to be found, while English ones worked...

CloudFront opens its eighth edge location in Tokyo...

Amazon CloudFront announced its eighth edge location in Tokyo: "Amazon CloudFront launches eighth Edge location in Tokyo, Japan".

Amazon CloudFront announces the addition of an eighth Edge location in Tokyo, Japan. The addition of another Edge location continues to expand CloudFront's capacity in the region, allowing us to serve increased volumes of web traffic.

The pace of growth is a bit startling: two were only just added in January, and now another one... Osaka, meanwhile, is still holding at one.

How do different road designs affect traffic flow?

Saw something interesting in "The rates of traffic flow on different kinds of 4-way intersections", which uses the game Cities: Skylines to simulate how different road designs affect traffic flow:

This is an animation of traffic flows simulated on 30 different kinds of four-way junctions, from two roads intersecting with no traffic lights or signs to complex stacked interchanges that feature very few interactions between individual cars. It was recorded in a game called Cities: Skylines, a more realistic take on SimCity.

The video is here:

Remember this is a simulation; the real world has other considerations, so treat the results as a reference only...

There are plenty of familiar designs in there, plus some amazing ones I'd never seen XD. Some are also super complex; would someone driving them for the first time really know where to go XDDD

Russia's BGP traffic reroute...

A few days ago (on the 12th), BGPmon noticed that many well-known prefixes were being rerouted to Russia: "Popular Destinations rerouted to Russia".

Early this morning (UTC) our systems detected a suspicious event where many prefixes for high profile destinations were being announced by an unused Russian Autonomous System.

Quite a few well-known prefixes can be seen being diverted:

Starting at 04:43 (UTC) 80 prefixes normally announced by organizations such Google, Apple, Facebook, Microsoft, Twitch, NTT Communications and Riot Games were now detected in the global BGP routing tables with an Origin AS of 39523 (DV-LINK-AS), out of Russia.

The chart also shows AS39523 announcing these routes through AS31133, with propagation mainly via AS6939 (Hurricane Electric):

Russia has been making a lot more moves on the Internet these past few years...

AWS CodeDeploy supports more fancy tricks on AWS Lambda

AWS Lambda can finally integrate deeply with AWS CodeDeploy: "AWS Lambda Supports Traffic Shifting and Phased Deployments with AWS CodeDeploy".

For example, shifting traffic between two versions by weight:

You can now shift incoming traffic between two AWS Lambda function versions based on pre-assigned weights.
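
CodeDeploy drives this through Lambda's alias routing configuration, which you can also poke at directly. A minimal sketch with the AWS CLI (function name, alias and version numbers are placeholders; double-check the shorthand syntax against the docs):

# point the "live" alias at version 2, but keep sending 10% of traffic to version 1
aws lambda update-alias \
    --function-name my-function \
    --name live \
    --function-version 2 \
    --routing-config 'AdditionalVersionWeights={"1"=0.1}'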

Extend this and you get Phased Deployments...