Google Cloud Platform can now launch machines with the Standard Network tier in the Taiwan region

Google Cloud Platform initially only offered the Premium Network tier, which carries traffic over Google's own backbone to the point closest to the user before handing it off to a local exchange. This keeps bandwidth stable, but the cost is naturally higher...

Later they added the Standard Network tier, which hands traffic off to the local exchange right after it leaves the data center, so it costs less (see "Network Service Tiers - Custom Cloud Network"). But the Taiwan region never offered Standard Network (it seemed to require a separate request?), so at the end of every month I would test whether it had been opened up... and I just found out it can now be enabled, though I'm not sure whether it's fully open or being rolled out in batches.
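
If you want to try it, the tier can be picked per instance; here is a minimal sketch assuming gcloud is already set up (the instance name and zone are placeholders, asia-east1 being the Taiwan region):

# Create an instance in asia-east1 whose external IP uses the Standard tier
gcloud compute instances create standard-test \
    --zone=asia-east1-b \
    --network-tier=STANDARD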

A quick test shows the network is pretty... bad? Maybe it's still being tuned...

For example, latency to 1.1.1.1 is quite high (Google's own 8.8.8.8 of course doesn't have this problem):

PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=49 time=48.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=49 time=48.5 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=49 time=48.5 ms

--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 48.554/48.691/48.964/0.193 ms

168.95.1.1 and 139.175.1.1 are also quite bad (61.31.1.1 doesn't answer ping):

PING 168.95.1.1 (168.95.1.1) 56(84) bytes of data.
64 bytes from 168.95.1.1: icmp_seq=1 ttl=239 time=21.2 ms
64 bytes from 168.95.1.1: icmp_seq=2 ttl=239 time=21.3 ms
64 bytes from 168.95.1.1: icmp_seq=3 ttl=239 time=21.4 ms

--- 168.95.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 21.276/21.367/21.454/0.072 ms
PING 139.175.1.1 (139.175.1.1) 56(84) bytes of data.
64 bytes from 139.175.1.1: icmp_seq=1 ttl=53 time=63.4 ms
64 bytes from 139.175.1.1: icmp_seq=2 ttl=53 time=62.9 ms
64 bytes from 139.175.1.1: icmp_seq=3 ttl=53 time=62.9 ms

--- 139.175.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 62.967/63.139/63.455/0.303 ms

The academic network, on the other hand, is pretty decent:

PING 140.112.2.2 (140.112.2.2) 56(84) bytes of data.
64 bytes from 140.112.2.2: icmp_seq=1 ttl=51 time=5.13 ms
64 bytes from 140.112.2.2: icmp_seq=2 ttl=51 time=4.40 ms
64 bytes from 140.112.2.2: icmp_seq=3 ttl=51 time=4.52 ms

--- 140.112.2.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.405/4.690/5.138/0.325 ms
PING 140.113.250.135 (140.113.250.135) 56(84) bytes of data.
64 bytes from 140.113.250.135: icmp_seq=1 ttl=55 time=5.87 ms
64 bytes from 140.113.250.135: icmp_seq=2 ttl=55 time=5.97 ms
64 bytes from 140.113.250.135: icmp_seq=3 ttl=55 time=6.11 ms

--- 140.113.250.135 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.872/5.987/6.119/0.135 ms
PING 140.117.11.1 (140.117.11.1) 56(84) bytes of data.
64 bytes from 140.117.11.1: icmp_seq=1 ttl=242 time=9.52 ms
64 bytes from 140.117.11.1: icmp_seq=2 ttl=242 time=9.17 ms
64 bytes from 140.117.11.1: icmp_seq=3 ttl=242 time=9.20 ms

--- 140.117.11.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.172/9.298/9.521/0.176 ms

If you need it, you can go test it out now...

Details of linking networks with pfSense on AWS

This is about connecting traffic across different AWS accounts (i.e. site-to-site VPN) without using AWS's own interconnect services, using pfSense instead.

The main reasons for rolling our own:

  • AWS Transit Gateway pricing: every attachment adds another charge, and the traffic it processes is billed on top of that.
  • The application's traffic is small, so even a t2.nano gives roughly 100Mbps of capacity, which is good enough.
  • The application was also written with disconnection handling in mind, and the client side's network isn't that stable to begin with, so no matter how high AWS Transit Gateway's SLA is, I still have to handle what happens after a disconnect anyway; might as well not stress over it...

Things to watch out for when setting it up:

  • Turn off the EC2 Source/Destination check; this is table stakes (see the sketch after this list).
  • Go over the routing inside the VPC once.
  • The Security Group on the pfSense instance has to be wide open, because pfSense sends packets that don't come from its own IP address and receives packets that aren't destined to its own IP address (via the routing mentioned above). These still go through the Security Group checks, and a Security Group can only hold a limited number of rules, so in practice you'll basically open everything...
  • After IPsec is set up on pfSense, the firewall on pfSense itself also needs to be opened manually, because it's closed by default.
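
As a rough sketch of the AWS-side pieces above, assuming the AWS CLI, with the instance ID, route table ID, and remote CIDR as placeholders:

# Disable the source/destination check on the pfSense instance
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check

# Point the remote site's range at the pfSense instance's ENI
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.1.0.0/16 \
    --network-interface-id eni-0123456789abcdef0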

This approach is really just what people did before AWS released Transit Gateway; the old way still works well...

Hacker News

This morning I saw "Tell HN: Thank you for not redesigning Hacker News". The author is in a region with limited network speed where most websites barely work, but Hacker News hasn't switched to a pile of front-end frameworks; by sticking with plain HTML the pages stay tiny:

I’m currently in a country with low speed internet and the entire ‘modern’ web is basically unusable except HN, which still loads instantly. Reddit, Twitter, news and banking sites are all painfully slow or simply time out altogether.

To PG, the mods and whoever else is responsible: thank you for not trying to ‘fix’ what isn’t broken.

Out of curiosity I opened the network tools to take a look, and the single largest asset turned out to be the favicon XDDD:

Going by the compatibility table on Wikipedia, if you only need to support IE11+ it looks like you can switch to PNG, and that alone already gives a clear size improvement:

-rw-r--r-- 1 gslin staff 7527 Sep  2 09:50 favicon.ico
-rw-r--r-- 1 gslin staff 2598 Sep  2 09:51 favicon.png
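
For reference, the PNG above can be produced with something like ImageMagick; this is just one way to do it (the [0] picks the first image inside the .ico):

# Convert the first frame of the .ico into a PNG
convert 'favicon.ico[0]' favicon.png

The page would then reference it with a <link rel="icon" type="image/png"> tag instead of relying on the default /favicon.ico lookup.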

AWS Systems Manager supports SSH's -L feature (Port Forwarding)

AWS Systems Manager has announced support for SSH port forwarding (i.e. the -L option in OpenSSH): "New – Port Forwarding Using AWS System Manager Sessions Manager".

Previously you had to connect to that host first and then punch back out through another layer with -R, which only works if that machine has Internet access... With this release AWS Systems Manager provides it directly, so being able to reach the host is all you need.
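
A rough example of what this looks like with the AWS CLI and the Session Manager plugin installed (the instance ID and ports here are placeholders):

# Forward local port 8080 to port 80 on the instance, tunneled over Session Manager
aws ssm start-session \
    --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["8080"]}'

which is roughly the Session Manager equivalent of ssh -L 8080:localhost:80 against that host.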

AWS Client VPN supports Split-tunnel

Split-tunnel in VPN terms means partial routing: only certain IP ranges are routed into the VPN, while most other traffic still goes over the regular Internet.

This is usually somewhat less secure than full routing, because it gives Internet traffic a chance to reach into the VPN (for example via the browser), but since it lets users avoid the heavy slowdown of pushing everything through a trans-oceanic VPN, it's a commonly used VPN feature.

AWS Client VPN has now implemented this feature: "AWS Client VPN now adds support for Split-tunnel".
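
A minimal sketch of turning it on for an existing endpoint with the AWS CLI (the endpoint ID is a placeholder); which ranges actually go into the tunnel is then controlled by the routes you add to the endpoint:

# Enable split-tunnel on an existing Client VPN endpoint
aws ec2 modify-client-vpn-endpoint \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --split-tunnel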

That said, AWS Client VPN is quite a bit more expensive than running your own, and most of the organizations I know of still self-host...

A service and paper on improving streaming quality with machine learning

Saw the "Puffer" service on Hacker News; it uses a machine learning algorithm to dynamically adjust the bitrate and improve streaming quality.

The data collected from the experiment was later written up into a paper: "Continual learning improves Internet video streaming".

The paper opens by introducing Fugu, the algorithm:

We describe Fugu, a continual learning algorithm for bitrate selection in streaming video.

And Puffer is the experimental website:

We evaluate Fugu with Puffer, a public website we built that streams live TV using Fugu and existing algorithms. Over a nine-day period in January 2019, Puffer streamed 8,131 hours of video to 3,719 unique users.

The site offers a number of real channels for testing:

Stream live TV in your browser. There's no charge. You can watch U.S. TV stations affiliated with the NBC, CBS, ABC, PBS, FOX, and Univision networks.

You can see that Fugu's results are very good, an across-the-board improvement over the other proposed approaches:

The tests were done over WebSocket, paired with different TCP congestion algorithms, without much consideration of the extra computation cost...

AWS offers VPC Traffic Mirroring

In a data center you could use port mirroring on a switch to inspect traffic and troubleshoot; AWS now offers a similar feature, VPC Traffic Mirroring: "New – VPC Traffic Mirroring – Capture & Inspect Network Traffic".

All the switch-based techniques from traditional data centers can now be rebuilt on AWS, so it's not too surprising that the first wave already has a bunch of partners offering services and some companies sharing their experience.

AWS's VPC Traffic Mirroring is also more flexible than a switch's port mirror: you can treat the whole network as a source, or designate specific ENIs as sources:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

And besides sending the mirrored traffic to an ENI, you can also send it to an NLB:

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

As expected, you can filter packets:

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

And there's the notion of evaluating sessions (from the description here, this should mean stateful connections?):

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.
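
Putting the pieces together, a rough AWS CLI sketch might look like this (all the IDs, the ARN, and the CIDR/rule values are placeholders):

# Mirror target: an NLB that the capture/IDS boxes sit behind (an ENI also works)
aws ec2 create-traffic-mirror-target \
    --network-load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/capture/0123456789abcdef

# Mirror filter: accept all inbound traffic
aws ec2 create-traffic-mirror-filter --description "capture everything"
aws ec2 create-traffic-mirror-filter-rule \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --traffic-direction ingress \
    --rule-number 1 \
    --rule-action accept \
    --source-cidr-block 0.0.0.0/0 \
    --destination-cidr-block 0.0.0.0/0

# Mirror session: copy traffic from a source ENI to the target through the filter
aws ec2 create-traffic-mirror-session \
    --network-interface-id eni-0123456789abcdef0 \
    --traffic-mirror-target-id tmt-0123456789abcdef0 \
    --traffic-mirror-filter-id tmf-0123456789abcdef0 \
    --session-number 1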

And this launch covers almost all regions from day one, and the pricing doesn't look too expensive either:

VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

How Instagram improved video publishing latency

No magic here; it's really a change to the product-side spec (but published on Instagram Engineering): "Video Upload Latency Improvements at Instagram".

In the original version, a video could only be published after all formats had finished transcoding:

Then the spec was changed so the video can be published as soon as the highest-quality version is done:

The idea is, instead of blocking until all video versions are available, we can publish the video once the highest-quality video version is available.

Next, videos are uploaded in segments, so half the upload can already be processed while the other half is still in transit, turning it into a pipeline; the cost is more code complexity and being forced to rework how the video quality metrics are computed:

Segmented uploads reduce upload latency in many cases but come with a few tradeoffs. For instance, segmented uploads increase the complexity of the pipeline. There are some quality metrics that are only available per segment at transcode time, such as SSIM. These metrics are not helpful to us on a per segment basis. Therefore, we need to do a duration weighted average of the SSIM of all segments to come up with the SSIM of the whole video. Similarly, handling exceptions is more complex since there are more cases to handle.
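
The duration-weighted SSIM they mention is just sum(duration_i * SSIM_i) / sum(duration_i); as a toy illustration, assuming a hypothetical segments.csv with one "duration,ssim" pair per line:

# Duration-weighted average of per-segment SSIM values
awk -F, '{ sum += $1 * $2; total += $1 } END { print sum / total }' segments.csv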

There's also a special case where the uploaded video already matches the server-side spec, in which case it's passed through directly (doesn't this raise security concerns, though...):

Another performance optimization we use to improve the upload latency and save CPU utilization is something we call a “passthrough” upload. In some cases, the media that is uploaded is already ready for playback on most devices.

These are all approaches you could come up with yourself, and each carries a tradeoff, rather than being a purely positive improvement :o

A fake news generator and detector

News seen on Hacker News about "generating news with neural networks" (i.e. mass-producing fake news programmatically); this time the results cover both "generation" and "detection": "Grover – A State-of-the-Art Defense Against Neural Fake News (allenai.org)".

The experimental website is at "Grover - A State-of-the-Art Defense against Neural Fake News", and there's also a paper, "Defending Against Neural Fake News", to read.

A few months ago, OpenAI used neural networks to build a program that "writes news automatically", and at the time they claimed the results were so good that they decided not to release them in full: "Better Language Models and Their Implications"; for coverage in Chinese see this iThome article: 「AI文字產生技術引發假新聞爭議,OpenAI決定只公開部份技術成果」.

Now the Allen Institute for Artificial Intelligence has successfully reproduced OpenAI's results under the name Grover, and found that the trained model can not only write news but also detect whether an article was machine-generated; in their own tests the detection accuracy is quite high:

To study and detect neural fake news, we built a model named Grover. Our study presents a surprising result: the best way to detect neural fake news is to use a model that is also a generator. The generator is most familiar with its own habits, quirks, and traits, as well as those from similar AI models, especially those trained on similar data, i.e. publicly available news. Our model, Grover, is a generator that can easily spot its own generated fake news articles, as well as those generated by other AIs. In a challenging setting with limited access to neural fake news articles, Grover obtains over 92% accuracy at telling apart human-written from machine-written news. Please read our publication for more information.

It looks like the source code and model still haven't been released, but sooner or later there will probably be an open source clone...

It reminds me of the information-control warfare in the Ghost in the Shell TV anime; the show didn't go into much detail, but it feels like tools like this would be part of it...

Finding matching avatars for your iPhone contacts

Saw this a few days ago: "Announcing Vignette", which uses social network data to update the icons in your contact list:

It's hard to find through App Store search; searching for "Vignette" didn't turn it up for me at first, but "Vignette Update" did. Or you can open the App Store directly via the link they provide: "Vignette – Update Contact Pics".

It's an IAP-style paid service: scanning is free, but if you want to write the updates back to your contacts you need to pay USD$4.99 (one-time); Taiwanese accounts pay TWD$170, presumably because of the recent tax adjustments:

Vignette allows you to scan your contacts and see what it can find for free. If you wish to actually save these updates to your contact list, you must pay for a one-time in-app purchase. That purchase costs $4.99, is not a subscription, and is the only in-app purchase.

The search covers Gravatar, Twitter, Facebook, and Instagram:

  • Email is used for Gravatar
  • Twitter
  • Facebook
  • A custom network called Instagram

The author also mentions that the app doesn't send data to a server; everything is looked up on your own device by connecting to the social networks mentioned above:

Privacy is paramount
All the processing is done on-device; this isn’t the sort of app where your contacts are uploaded en masse to some server, and out of your control.

So it won't be very fast, but it's better for privacy...