Using a WireGuard client as a SOCKS5 proxy

Saw 「Wireproxy: WireGuard client that exposes itself as a HTTP/SOCKS5 proxy (github.com/pufferffish)」 on Hacker News, and it's an interesting project; the GitHub page is at pufferffish/wireproxy:

A wireguard client that exposes itself as a socks5/http proxy or tunnels.

Instead of connecting to a full WireGuard VPN, it lets you use the tunnel through a proxy protocol, so with browser tools like SwitchyOmega it's easy to set up all kinds of variations, for example routing Japan-only websites out through a VPS instance in Japan.

Another thing worth mentioning is that this is a userland implementation that doesn't need root privileges; that's easy enough to understand, since it only listens on a TCP port and talks to the remote end over the WireGuard protocol, which indeed doesn't require root.
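For reference, the setup is basically a wg-quick-style config plus a section for the proxy listener. This is a rough sketch from memory of the README (section and field names may differ between versions, and vpn.example.com and the addresses are placeholders), not a copy of the project's own example:

    # sketch: wireproxy config, roughly wg-quick syntax plus a [Socks5] section
    # (field names approximate; check the project README for the exact format)
    cat > wireproxy.conf <<'EOF'
    [Interface]
    PrivateKey = <client private key>
    Address = 10.0.0.2/32
    DNS = 1.1.1.1

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820

    [Socks5]
    BindAddress = 127.0.0.1:1080
    EOF

    # no root needed: it only opens a local TCP listener plus a UDP socket to the peer
    ./wireproxy -c wireproxy.conf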

Also, in the Hacker News comments I came across aramperes/onetun, which looks like a server-side implementation and supports both TCP and UDP:

A cross-platform, user-space WireGuard port-forwarder that requires no root-access or system network configurations.

These two look like they could fit together nicely...?

What Facebook was after when it logged users' activity through a VPN

In early 2019, TechCrunch broke the story that Facebook was paying users and recording their behavior through a VPN (plus an installed Root CA): 「Facebook paid users to buy their activity records」. Recently disclosed documents reveal what the goal was at the time: 「Facebook snooped on users' Snapchat traffic in secret project, documents reveal」.

The TechCrunch article doesn't include the emails themselves, so I looked for other coverage: 「Project Ghostbusters: Facebook Accused of Using Your Phone to Wiretap Snapchat」, which links two documents containing the email exchanges: 「Document 735」 and 「Document 736」.

In them you can see the goal was to capture usage behavior from Snapchat, YouTube, and Amazon:

The goal of Facebook’s SSL bump technology was the company’s acquisition, decryption, transfer, and use in competitive decisionmaking of private, encrypted in-app analytics from the Snapchat, YouTube, and Amazon apps, which were supposed to be transmitted over a secure connection between those respective apps and secure servers (sc-analytics.appspot.com for Snapchat, s.youtube.com and youtubei.googleapis.com for YouTube, and *.amazon.com for Amazon). Id.

The emails also mention that the implementation was built on Squid:

Today we are using the Onavo vpn-proxy stack to deploy squid with ssl bump the stack runs in edge on our own hosts (onavopp and onavolb) with a really old version of squid (3.1).

The lawsuit cites 18 U.S. Code § 2511 - Interception and disclosure of wire, oral, or electronic communications prohibited, so this looks like it could turn into a federal-level criminal case...

That was back in an era when certificate pinning wasn't yet common...?

Ways to get through a firewall

Start with 「SSH over HTTPS (trofi.github.io)」; the original post, 「SSH over HTTPS」, explains how to use HTTPS plus the CONNECT method to get through.

The author gives some background first: he expected to stay in the hospital for a few days, and the hospital has free WiFi, but with lots of restrictions; for TCP, basically only 80/tcp and 443/tcp get through. He also had mobile data (though probably not an unlimited plan?), which he could use on site as a tool for setting up the bypass:

I planned to spend 1-2 days in the hospital and did not plan to use the laptop. But now I am stuck here for the past two weeks and would like to tinker on small stuff remotely. The hospital has free Wi-fi access.

The caveat is that hospital blocks most connection types. It allows you only to do HTTP (TCP port 80) and HTTPS (TCP port 443) connections for most hosts. DNS (UDP port 53) and DoT (TCP port 853) seem to work as well at least for well-known DNS providers.

But SSH (TCP port 22 or most other custom ports) is blocked completely.

I wondered how hard would it be to pass SSH through HTTP or HTTPS. I had a GSM fallback so I could reconfigure remote server and try various solutions.

The author's approach is to run SSH on top of a TLS/SSL connection. I've done this before, but as the comments point out, Cisco's AnyConnect (mostly via the open-source OpenConnect client and the open-source ocserv server) is more flexible: the AnyConnect protocol automatically tries UDP-based DTLS, which transfers more efficiently than TCP-based protocols, and on iOS you can simply install Cisco's official client from the App Store.

Judging from the author's other posts, he clearly knows his way around this; he probably went this route because he already had an HTTPS Apache server he could configure for it.
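As a sketch of what the CONNECT trick looks like on the client side (the general pattern, not necessarily the author's exact setup): assuming your own server, here called home.example.com, runs an HTTPS forward proxy on port 443 that allows CONNECT to its local sshd, proxytunnel can be used as the ProxyCommand:

    # ~/.ssh/config  (home.example.com is a placeholder for your own HTTPS server)
    Host home-via-443
        # -E: wrap the connection to the proxy in TLS, so the hospital firewall
        #     only sees an ordinary HTTPS session to port 443
        # -p: the HTTPS proxy to talk to, -d: where the CONNECT should end up
        ProxyCommand proxytunnel -E -p home.example.com:443 -d 127.0.0.1:22

    # then just:
    ssh user@home-via-443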

The author doesn't spell it out, but the idea is presumably that once you have SSH, you can use -D on the command line to get a local SOCKS service and use it as a proxy for everything else; most common applications support it.
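In other words, something along these lines (host and port are placeholders):

    # dynamic port forward: local SOCKS5 proxy on 127.0.0.1:1080,
    # with traffic exiting from the remote host
    ssh -D 1080 -N user@home-via-443

    # point applications at it, e.g. curl, or a browser via SwitchyOmega
    curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me/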

It's probably just a temporary workaround for a one- or two-day hospital stay; if this came up regularly, setting up ocserv to route around it would make more sense...

Mullvad has released its own browser as well

Saw on Hacker News that the VPN vendor Mullvad has also released its own browser, the Mullvad Browser: 「The Mullvad Browser (mullvad.net)」.

It's derived from Tor Browser, with Firefox at the bottom of the stack; opening it up, it ships with three extensions preinstalled.

And because it's based on Tor Browser, many of the defaults lean toward privacy hardening. You can see this by testing on fingerprint.com: the fingerprint changes every time you restart the browser, which means sites that rely on canvas have a fair chance of breaking... There are some other inconveniences too, such as the time-related settings: because the timezone is hidden, the server side can't get the client's real local time.

The FAQ also mentions that Mullvad Browser won't let you stay logged in via cookies:

How do I stay logged into specific websites between sessions?
It’s not possible. It’s an action to combat tracking.

So this browser isn't positioned for everyday use... but in that case I'd rather just use Tor Browser?

Tailscale Funnel enters public beta

I wrote about Tailscale Funnel back in November last year: 「Tailscale also launches a product similar to Cloudflare Tunnel: Tailscale Funnel」, and it has just been announced as a public beta: 「Tailscale Funnel now available in beta」.

I gave it a quick try; the setup isn't hard, but there are a few steps:

  • In the web console, first allow Funnel under Access Controls.
  • Then enable HTTPS Certificates under DNS.
  • The rest is just following the docs and running the corresponding commands locally (see the sketch below).
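The local commands look roughly like this; note the CLI syntax has changed a couple of times since the beta, so treat this as a sketch and follow the current docs for the exact form (the hostname is a placeholder for your node's ts.net name):

    # fetch the Let's Encrypt cert/key locally if you want to handle TLS yourself
    tailscale cert <node>.<tailnet>.ts.net

    # expose a local service; with a recent client this is roughly:
    tailscale serve 3000     # https://<node>.<tailnet>.ts.net -> localhost:3000, tailnet only
    tailscale funnel 3000    # same mapping, but reachable from the public internet
    tailscale funnel status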

The idea is to take the *.ts.net domain out onto the public internet, but they want the Let's Encrypt-issued key to stay on the local machine rather than letting Tailscale see the contents.

They've also thought about the user experience: instead of making users handle all the SSL certificate chores themselves, it's implemented inside the tailscaled daemon, so the local service can keep speaking plain HTTP instead of HTTPS.

Correspondingly, if you want the actual key and cert, you can fetch them with the tailscale cert command.

I also checked the routing: HiNet users come in through the Tokyo node, which should be good enough for test and development machines.

Given how it works, its positioning isn't really the same as Cloudflare Tunnel; it's more of a squeeze on services like ngrok.

All Google One subscribers get the VPN service now...?

Saw on MacRumors that Google is now offering its VPN service to all paying Google One users: 「All Paid Google One Subscribers Now Get VPN Access」. Google's own announcement is here: 「New security features for all Google One plans」.

The pitch is privacy, such as keeping your IP address from being used to track you:

VPN by Google One adds more protection to your internet activity no matter what apps or browsers you use, shielding it from hackers or network operators by masking your IP address. Without a VPN, the sites and apps you visit could use your IP address to track your activity or determine your location.

Then I saw this comment, which sums it up nicely XDDD

Google running a VPN is like McDonalds running an exercise gym.

No thanks XDDD

Tailscale also launches a product similar to Cloudflare Tunnel: Tailscale Funnel

Tailscale has also launched a product similar to Cloudflare Tunnel, called Tailscale Funnel: 「Introducing Tailscale Funnel」.

Both work by having the server itself dial out to Cloudflare's or Tailscale's servers, and external requests are then routed back in over that connection.

That said, Tailscale Funnel's positioning is a bit different from Cloudflare Tunnel: Tailscale Funnel looks more like something for dev/stage environments, while Cloudflare Tunnel is designed to run in production?

It's currently in alpha, so there are some limitations, such as which ports you can open:

The ports you can specify to expose your servers on are currently limited to 443, 8443 and 10000.

Also, on the technical side, the connection pretty much has to be TLS, because they rely on the TLS SNI information to figure out which node the traffic is for:

We can only see the source IP and port, the SNI name, and the number of bytes passing through.
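This follows from how TLS works: the hostname the client wants is sent unencrypted as SNI in the ClientHello, so a relay can route on it without holding any keys. You can see the field being set explicitly with openssl (the hostname is just a placeholder):

    # -servername sets the SNI value; this is the part of the handshake the relay routes on
    openssl s_client -connect foo.example.ts.net:443 -servername foo.example.ts.net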

For people already on Tailscale this seems worth playing with; the other consideration is that Cloudflare's PoP density is much higher, which is probably something Tailscale has to think about as well?

Zero-downtime migration across clouds

Saw the thread 「Ask HN: Have you ever switched cloud?」 about migrating between clouds; vidarh's answer is worth reading...

First, on the motivation: it was basically all about money, moving from one cloud to another, and then on to dedicated hosting:

Yes. I once did zero downtime migration first from AWS to Google, then from Google to Hetzner for a client. Mostly for cost reasons: they had a lot of free credits, and moved to Hetzner when they ran out.

Their savings from using the credits were at least 20x what the migrations cost.

He also shared the whole checklist he'd put together. The first step is to set up load-balancer-like services on both sides:

* Set up haproxy, nginx or similar as reverse proxy and carefully decide if you can handle retries on failed queries. If you want true zero-downtime migration there's a challenge here in making sure you have a setup that lets you add and remove backends transparently. There are many ways of doing this of various complexity. I've tended to favour using dynamic dns updates for this; in this specific instance we used Hashicorp's Consul to keep dns updated w/services. I've also used ngx_mruby for instances where I needed more complex backend selection (allows writing Ruby code to execute within nginx)
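As an illustration of the "add and remove backends transparently" part (my own sketch, not vidarh's config; he used Consul-managed DNS and ngx_mruby), the nginx version of weighting backends across both sites looks roughly like this, with placeholder addresses:

    # nginx: one upstream pool spanning both clouds; shift traffic by adjusting
    # weights (or via DNS / service discovery) and mark old backends down to drain
    upstream app_backends {
        server 10.10.0.11:8080 weight=90;   # old cloud, reached over the site-to-site VPN
        server 10.20.0.11:8080 weight=10;   # new cloud
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backends;
            # retry a failed request on another backend, per the note about retries
            proxy_next_upstream error timeout;
        }
    }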

Next, connect the two internal networks, which is essentially a site-to-site VPN:

* Set up a VPN (or more depending on your networking setup) between the locations so that the reverse proxy can reach backends in both/all locations, and so that the backends can reach databases both places.

Then set up database replication to the new site, plus the mechanisms around it:

* Replicate the database to the new location.

* Ensure your app has a mechanism for determining which database to use as the master. Just as for the reverse proxy we used Consul to select. All backends would switch on promoting a replica to master.

* Ensure you have a fast method to promote a database replica to a master. You don't want to be in a situation of having to fiddle with this. We had fully automated scripts to do the failover.
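The comment doesn't say what their failover script actually ran; with PostgreSQL, the core of "promote a replica to master" is a one-liner, which is why it can complete in a couple of seconds (data directory path is a placeholder):

    # on the replica that should become the new primary (PostgreSQL)
    pg_ctl promote -D /var/lib/postgresql/15/main

    # or, from a superuser session on the replica:
    # SELECT pg_promote();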

Then make sure the application side can switch over cleanly:

* Ensure your app gracefully handles database failure of whatever it thinks the current master is. This is the trickiest bit in some cases, as you either need to make sure updates are idempotent, or you need to make sure updates during the switchover either reliably fail or reliably succeed. In the case I mentioned we were able to safely retry requests, but in many cases it'll be safer to just punt on true zero downtime migration assuming your setup can handle promotion of the new master fast enough (in our case the promotion of the new Postgres master took literally a couple of seconds, during which any failing updates would just translate to some page loads being slow as they retried, but if we hadn't been able to retry it'd have meant a few seconds downtime).

Once you've confirmed that the new cloud has enough capacity to handle the traffic, the migration itself starts. First, lower the DNS TTL:

Once you have the new environment running and capable of handling requests (but using the database in the old environment):

* Reduce DNS record TTL.

Then point the old load balancer at the new backends; if problems show up at this point, it's quick to roll back:

* Ensure the new backends are added to the reverse proxy. You should start seeing requests flow through the new backends and can verify error rates aren't increasing. This should be quick to undo if you see errors.

Next, point DNS at the new load balancer; in theory there shouldn't be any big surprises:

* Update DNS to add the new environment reverse proxy. You should start seeing requests hit the new reverse proxy, and some of it should flow through the new backends. Wait to see if any issues.

Then switch the database over to the new site; if something goes wrong, you can switch back quickly and work out what happened:

* Promote the replica in the new location to master and verify everything still works. Ensure whatever replication you need from the new master works. You should now see all database requests hitting the new master.

The final stage is tearing down the old environment:

* Drain connections from the old backends (remove them from the pool, but leave them running until they're not handling any requests). You should now have all traffic past the reverse proxy going via the new environment.

* Update DNS to remove the old environment reverse proxy. Wait for all traffic to stop hitting the old reverse proxy.

* When you're confident everything is fine, you can disable the old environment and bring DNS TTL back up.
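For the "drain connections" step above, if the reverse proxy is haproxy this can be done through the runtime API without a reload; a sketch, with placeholder backend/server names and assuming a stats socket is configured at the path shown:

    # stop sending new connections to an old backend, but let existing ones finish
    echo "set server app_backends/old1 state drain" | socat stdio /var/run/haproxy.sock

    # once it's idle, take it out of rotation entirely
    echo "set server app_backends/old1 state maint" | socat stdio /var/run/haproxy.sock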

This method really has nothing cloud-specific about it; anyone who has planned a data center migration has probably drawn up something similar, with the same broad strokes (analyzing stateful services and stateless services separately). The difference is that without the cloud's elastic rental model, you have to provision a lot more hardware up front...

I remember Instagram had a similar plan when they moved into Facebook's data centers; I've mentioned it before: 「Instagram moving from AWS into Facebook's data centers」.

In Taiwan, the recent case seems to be PChome 24h moving their systems onto GCP? We'll see whether they talk about the migration at one of GCP's events later...

wireproxy, software for reaching a WireGuard network through a SOCKS5 interface

Saw 「A userspace WireGuard client that exposes itself as a proxy (github.com/octeep)」 on Hacker News, and it's an interesting project: it acts as a WireGuard client and then lets you use the tunnel through a SOCKS5 interface... The project itself is on GitHub as 「Wireguard client that exposes itself as a socks5 proxy」.

Besides software with native SOCKS5 support, it can also be combined with tools that push TCP connections into a SOCKS5 service, like tsocks (which hooks connections via LD_PRELOAD) or redsocks (which does it with an iptables redirect instead).
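With tsocks it looks roughly like this (the config keys are as I remember them from its man page, so double-check; the port assumes the local SOCKS server, e.g. wireproxy, is listening on 127.0.0.1:1080):

    # minimal tsocks config pointing at the local SOCKS server
    cat > /tmp/tsocks.conf <<'EOF'
    server = 127.0.0.1
    server_port = 1080
    server_type = 5
    EOF

    # the tsocks wrapper uses LD_PRELOAD so the wrapped program's TCP
    # connections go through the SOCKS proxy transparently
    TSOCKS_CONF_FILE=/tmp/tsocks.conf tsocks git pull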

The whole thing is written in Go, which seems like a good excuse to practice packaging an Ubuntu PPA...

Replacing a personal VPN with Tailscale

Tailscale is a VPN service built on WireGuard. The basic model is that all your machines join the VPN, Tailscale builds an internal network for them on a CGNAT range so they can reach each other, and you can also designate exit nodes among them for outbound traffic.

Another selling point is that they've packaged up hole punching, so two machines that are both behind NAT can usually be connected directly (it works in most situations) without relaying traffic through a VPN hub, which keeps latency much lower (since most of the time there's no VPN hub in Taiwan anyway).

And since VPN hubs are mostly unnecessary, Tailscale's operating costs aren't that high, so they offer a free personal tier that allows 20 devices on the same internal network.

I used to make a habit of routing through a VPN server when using Wi-Fi at cafés, just to protect the connection a bit; it looks like Tailscale can take over that job now... the HiNet desktop at home or a VPS can both serve as exit nodes.
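The exit node part is just two flags (the 100.x address is a placeholder for the exit node's Tailscale IP, and the node still has to be approved as an exit node in the admin console):

    # on the home desktop / VPS (on Linux, IP forwarding also needs to be enabled)
    sudo tailscale up --advertise-exit-node

    # on the laptop at the café, route all traffic out through it
    sudo tailscale up --exit-node=100.101.102.103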

There's also the open-source headscale project worth a look, if you want even more control and to host everything yourself...