Posts tagged "ddos"

Cloudflare Launches Spectrum: A Proxy That Can Forward All 65,535 TCP Ports...

Cloudflare has launched Spectrum; the 65,533 in the article title presumably refers to the ports other than 80 & 443: "Introducing Spectrum: Extending Cloudflare To 65,533 More Ports".

And because a TCP proxy, unlike an HTTP proxy or WebSocket proxy, cannot rely on the Host header to tell services apart, each TCP-proxied service needs a dedicated IP address (i.e. one IP address can only serve one customer). Since IPv4 addresses are in short supply, the feature is only open to Enterprise customers:

Today we are introducing Spectrum, which brings Cloudflare’s security and acceleration to the whole spectrum of TCP ports and protocols for our Enterprise customers.

Although it is currently limited to Enterprise customers, Cloudflare is still asking for other ideas; the options floated so far include offering IPv6-only Spectrum to everyone, or making the IPv4 address a separately billed item:

Why just Enterprise? While HTTP can use the Host header to identify services, TCP relies on each service having a unique IP address in order to identify it. Since IPv4 addresses are endangered, it’s quite expensive for us to delegate an IP per application and we needed to limit use. We’re actively thinking about ways to bring Spectrum to everyone. One idea is to offer IPv6-only Spectrum to non-Enterprise customers. Another idea is let anyone use Spectrum but pay for the IPv4 address. We’re not sure yet, but if you prefer one to the other, feel free to comment and let us know.
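
A quick way to see the difference: with HTTP, many hostnames can share one IP address because the client names the site inside the request, while a raw TCP connection carries no such field. The commands below are purely illustrative (192.0.2.1 is an RFC 5737 documentation address and the hostnames are made up), not anything Cloudflare-specific:

# Two different sites behind the same IP: the Host header picks the backend.
curl -s -H "Host: site-a.example" http://192.0.2.1/
curl -s -H "Host: site-b.example" http://192.0.2.1/

# A raw TCP connection has no equivalent field, so the destination IP
# (and port) is all the proxy has to route on.
nc 192.0.2.1 5432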

The closest comparable products are probably clean-pipe services, but a typical clean pipe scrubs traffic by rerouting it, rather than with a design like Cloudflare's... It will be interesting to see how this develops.

The Attack GitHub Took on 2/28...

GitHub was hit by a DDoS attack on 2/28 and published an incident report fairly quickly: "February 28th DDoS Incident Report".

Unlike GitHub's other write-ups, though, this one reads more like a PR piece; in short, they solved it by paying for Akamai Prolexic's scrubbing service... Akamai's own PR piece can be found at "Memcached-fueled 1.3 Tbps attacks - The Akamai Blog".

The problem was detected at 17:21 UTC; once it was judged to exceed 100Gbps, the decision was made at 17:26 to hand filtering over to Akamai Prolexic:

At 17:21 UTC our network monitoring system detected an anomaly in the ratio of ingress to egress traffic and notified the on-call engineer and others in our chat system. This graph shows inbound versus outbound throughput over transit links:

Given the increase in inbound transit bandwidth to over 100Gbps in one of our facilities, the decision was made to move traffic to Akamai, who could help provide additional edge network capacity. At 17:26 UTC the command was initiated via our ChatOps tooling to withdraw BGP announcements over transit providers and announce AS36459 exclusively over our links to Akamai. Routes reconverged in the next few minutes and access control lists mitigated the attack at their border. Monitoring of transit bandwidth levels and load balancer response codes indicated a full recovery at 17:30 UTC. At 17:34 UTC routes to internet exchanges were withdrawn as a follow-up to shift an additional 40Gbps away from our edge.

And that's all there is to it; it really is just a PR piece XDDD

The Upper Limit of Akamai's DDoS Mitigation

This is probably the most significant piece of news among recent DDoS incidents: from this event we learn that Akamai was unable to block a certain kind of DDoS attack above 620Gbps, a volume that attackers have already been able to "demonstrate": "Akamai kicked journalist Brian Krebs' site off its servers after he was hit by a 'record' cyberattack".

The assault has flooded Krebs' site with more than 620 gigabits per second of traffic — nearly double what Akamai has seen in the past.

The entire Krebs on Security site has now been moved onto Google's Project Shield; what comes next is the test of time: "The Democratization of Censorship".

GitHub's Approach to Fighting TCP SYN Floods: synsanity

GitHub has published its own approach to mitigating TCP SYN floods: "SYN Flood Mitigation with synsanity".

synsanity is a netfilter (iptables) target that applies existing techniques to block SYN-flood style DDoS attacks:

synsanity is a netfilter (iptables) target for high performance lockless SYN cookies for SYN flood mitigation, as used in production at GitHub.

The prior approach (SYNPROXY) runs as a module and has to inspect every packet, which at GitHub's scale proved too slow and even caused kernel panics:

This is quite an intrusive way of solving the problem since it touches every packet during the entire connection, but it does successfully mitigate SYN floods. Unfortunately we found that in practise under our load and with the amount of malformed packets we receive, it quickly broke down and caused a kernel panic.
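
For reference, a typical SYNPROXY deployment looks roughly like the rules below; this is a generic sketch for port 80 based on the standard iptables SYNPROXY documentation, not GitHub's actual configuration:

# Don't create conntrack entries for incoming SYNs; let SYNPROXY handle them.
sudo iptables -t raw -A PREROUTING -i eth0 -p tcp --dport 80 --syn -j CT --notrack

# SYNPROXY answers the handshake with SYN cookies and only opens a real
# connection to the backend once the client completes the handshake.
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 \
    -m conntrack --ctstate INVALID,UNTRACKED \
    -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460

# Drop whatever is still invalid after that.
sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -m conntrack --ctstate INVALID -j DROP

# SYNPROXY also needs loose conntrack tracking turned off.
sudo sysctl -w net.netfilter.nf_conntrack_tcp_loose=0

Note that every packet of every connection still passes through conntrack and SYNPROXY here, which is exactly the per-packet cost the quote above is describing.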

synsanity, the netfilter (iptables) target GitHub developed, instead only touches the initial packets of a connection, and was written with multi-CPU lock contention in mind (hence the lockless SYN cookies).

Slack Has Put Its Service on CloudFront...

I just noticed that our company's Slack domain, kkbox.slack.com, is now behind CloudFront; testing a few other domains, it looks like all of them are:

gslin@home [~] [15:55/W4] mtr --report kkbox.slack.com
Start: Tue May 17 15:56:06 2016
HOST: home.gslin.org              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- h254.s98.ts.hinet.net      0.0%    10   16.3  11.7   9.3  18.8   3.1
  2.|-- SNUH-3302.hinet.net        0.0%    10   44.5  13.4   9.0  44.5  10.9
  3.|-- SNUH-3202.hinet.net        0.0%    10    8.8  12.7   8.8  26.8   5.9
  4.|-- TPDT-3012.hinet.net        0.0%    10   12.0  14.3  10.4  19.2   3.1
  5.|-- r4210-s2.hinet.net         0.0%    10    9.8  10.4   9.0  11.4   0.5
  6.|-- 203-75-228-29.HINET-IP.hi  0.0%    10   10.3  10.6   9.2  13.7   1.2
  7.|-- R1011-ASR.tpix.net.tw      0.0%    10   12.1  11.6  10.5  13.2   0.5
  8.|-- 202-133-255-122-static.un  0.0%    10   11.3  10.6   9.8  11.8   0.5
  9.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
 10.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
 11.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
 12.|-- server-54-230-215-138.tpe  0.0%    10   11.9  11.2  10.1  12.2   0.3
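
Resolving the hostname is another quick check; these commands are only illustrative and the exact answers will vary by resolver and location, but a CNAME or PTR pointing at cloudfront.net is a strong hint:

dig +short kkbox.slack.com
host 54.230.215.138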

This should help quite a bit with the problems we used to hit when the HiNet to Level3 path kept falling over?

Linode's Report on Being Attacked

Linode has been under DDoS attack for a while now, and released a report a few days ago: "The Twelve Days of Crisis – A Retrospective on Linode's Holiday DDoS Attacks".

This passage gives some numbers: Linode's smaller datacenters have 40Gbps of transit capacity, which at today's DDoS scale gets overwhelmed immediately:

Linode’s capacity management strategy for IP transit has been simple: when our peak daily utilization starts approaching 50% of our overall capacity, then it’s time to get more links.

This strategy is standard for carrier networks, but we now understand that it is inadequate for content networks like ours. To put some real numbers on this, our smaller datacenter networks have a total IP transit capacity of 40Gbps. This may seem like a lot of capacity to many of you, but in the context of an 80Gbps DDoS that can’t be blackholed, having only 20Gbps worth of headroom leaves us with crippling packet loss for the duration of the attack.

They also moved their DNS entirely onto CloudFlare and let them absorb the attacks:

Our nameservers are now protected by Cloudflare, and our websites are now protected by powerful commercial traffic scrubbing appliances.

The follow-up improvements will probably take a few more months? There should be another blog post once they're done...

Running Syncthing on an Openvirtuals VPS...

Openvirtuals is a hosting provider I came across on Low End Box; the deal listed on LEB was already gone, but clicking through I saw their Buffalo machines were 50% off when paid yearly, and with the large disk space I decided to grab one to play with...

The SSD-CACHED Mini is 256MB RAM + 512MB vSwap with 90GB of disk for USD$16/year, while the Standard doubles all of that for only USD$20/year, so I went with the Standard...

The control panel is more complete than I expected; the system information and status page is, feature-wise, really no worse than DigitalOcean's (though the visuals are pretty plain).

I installed 64-bit Ubuntu 14.04 on it, though the Linux kernel is on the old side at 2.6.32; per Wikipedia it dates from late 2009 and is the only 2.6 series kernel still being maintained:

Linux two 2.6.32-042stab108.2 #1 SMP Tue May 12 18:07:50 MSK 2015 x86_64 x86_64 x86_64 GNU/Linux

As for the network, testing showed it is not very stable; the path from HiNet sometimes has noticeable packet loss, possibly because a link along the way is being destabilized by DDoS traffic.

Since it only needs to run Syncthing anyway, that's fine and I'll leave it as is... I'm also running rtorrent on it to help seed Ubuntu ISO images.

CloudFlare Supports WebSockets

The announcement on CloudFlare's official blog: "CloudFlare Now Supports WebSockets".

So CloudFlare's much-touted DDoS protection now covers WebSockets as well:

The ability to protect and accelerate WebSockets has been one of our most requested features.

The post also touches on some technical issues with running WebSockets through a CDN (such as the number of available ports); it's worth a closer read if you're interested :o

BPF (Berkeley Packet Filter)

CloudFlare's "BPF - the forgotten bytecode" brought up BPF (Berkeley Packet Filter), which I realized I hadn't looked at since graduating from college... (and barely remember)

tcpdump can compile an expression into BPF bytecode and hand it to the kernel to execute; running the example from the CloudFlare article on my own machine:

gslin@GSLIN-DESKTOP [~] [07:27/W4] sudo tcpdump -p -ni eth1 -d "ip and udp"
(000) ldh      [12]
(001) jeq      #0x800           jt 2    jf 5
(002) ldb      [23]
(003) jeq      #0x11            jt 4    jf 5
(004) ret      #65535
(005) ret      #0

For complex filtering logic where performance matters, you may need to write the bytecode by hand (for example, testing the cheaper-to-match fields first to cut down the number of comparisons), and you can feed that bytecode directly to the kernel via a SOCK_RAW socket and SO_ATTACH_FILTER.
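
If you don't want to assemble the program entirely by hand, tcpdump can also emit it as C struct sock_filter initializers, ready to be attached with setsockopt() and SO_ATTACH_FILTER and then hand-tuned; this is the same "ip and udp" program as above, just in a different encoding:

sudo tcpdump -p -ni eth1 -dd "ip and udp"
{ 0x28, 0, 0, 0x0000000c },
{ 0x15, 0, 3, 0x00000800 },
{ 0x30, 0, 0, 0x00000017 },
{ 0x15, 0, 1, 0x00000011 },
{ 0x6, 0, 0, 0x0000ffff },
{ 0x6, 0, 0, 0x00000000 },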

The article doesn't say so explicitly, but it looks like CloudFlare does exactly this, especially given what it mentions later:

These kind of rules are very useful, they allow us to pinpoint the malicious traffic and drop it early. Just in the last couple of weeks we dropped 870,213,889,941 packets with few BPF rules. Recently during a flood we saw 41 billion packets dropped throughout a night due to a single well placed rule.

Good to keep in mind; it may come in handy someday...
