A Backdoor Controlled over DNS

Saw a backdoor system that uses DNS as its command and control channel in "Wekby APT Gang Using DNS Tunneling for Command and Control"; the original report is "New Wekby Attacks Use DNS Requests As Command and Control Mechanism".

DNS-based control punches through networks much more easily than HTTPS does, which makes it a considerably bigger threat when used in APT-style attacks...

Google DNS Latency in Taiwan Dropped Significantly

ICMP measurements against 8.8.8.8 show that Google changed something in its Taiwan datacenter's DNS service this morning, and latency dropped significantly. (mtr suddenly couldn't traceroute past a certain hop, so I thought something had blown up; checking Smokeping showed that Google had actually improved the architecture.)
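For reference, this is the kind of measurement involved; a minimal sketch (the mtr flags and ping count here are just my usual defaults, not what produced the graphs below):

# one-shot latency/path report against Google Public DNS
mtr -rwc 100 8.8.8.8

# or a plain ICMP latency sample
ping -c 20 8.8.8.8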

From Chunghwa Telecom's northern datacenter:

From Chunghwa Telecom's southern datacenter:

From a Taiwan Fixed Network datacenter (Neihu):

From a FarEasTone datacenter (I forget which one in Neihu...):

From a Gamania datacenter (Taipei area):

Split DNS on an Ubuntu Desktop

Split DNS means that queries for a particular DNS domain go to a separate set of DNS servers. It is commonly used with split-tunnel (partial route) VPNs so that internal domains resolve correctly. Commercial VPN software usually takes care of this, but sometimes you still want to set it up yourself...

On an Ubuntu desktop you can do split DNS through Dnsmasq. On my machine, ps awx | grep dnsmasq shows --conf-dir=/etc/NetworkManager/dnsmasq.d, which means the configuration directory is /etc/NetworkManager/dnsmasq.d, so I put a file named mysplit into /etc/NetworkManager/dnsmasq.d:

# send all queries under mysplit.com to the internal DNS server at 10.1.2.3
server=/mysplit.com/10.1.2.3

Then, the dnsmasq manpage mentions that SIGUSR{1,2} are for diagnostics, and that SIGHUP is not there for re-reading the configuration file XDDD

SIGHUP does NOT re-read the configuration file.

So just kill it; doing pretty much anything to NetworkManager will bring dnsmasq back up, or rebooting works too... After that, dig returns the expected records.
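One way to do that on this kind of setup, as a sketch (it assumes nmcli is available, that NetworkManager respawns its dnsmasq child, and internal.mysplit.com is a made-up name under the domain configured above):

# kill the NetworkManager-managed dnsmasq so it comes back with the new config
sudo pkill dnsmasq

# or bounce networking through NetworkManager instead
sudo nmcli networking off && sudo nmcli networking on

# verify: names under mysplit.com should now be resolved via 10.1.2.3
dig internal.mysplit.com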

Monkey Patching in Perl

I spent the whole weekend fighting edns-client-subnet in Net::DNS and ran into a bug in a small piece of code inside the module. For now I'm forcing my way through with a monkey patch, and will figure out later how to send a patch upstream.

The monkey-patching approach mostly follows the method given in "How can I monkey-patch an instance method in Perl?".

Since what actually happens is a subroutine redefinition (which produces a warning), you have to locally turn off warnings and then swap in the whole replacement subroutine:

use BugPackage;

{
    # redefining a subroutine normally triggers a "Subroutine ... redefined"
    # warning, so turn warnings off inside this block only
    no warnings;

    # replace the original subroutine with our own version; "local" restores
    # the original when this block's dynamic scope ends, so call the patched
    # code from within the block (or drop "local" for a process-wide override)
    local *BugPackage::bug_function = sub {
        # new code
    };
}

This swaps the subroutine out without touching the original module's source code.

The Difference Between Preload, Prefetch, and Preconnect

KeyCDN's article "Resource Hints – What is Preload, Prefetch, and Preconnect?" explains the differences between Preload, Prefetch, and Preconnect.

Preconnect is pretty self-explanatory from the name, while the biggest difference between Preload and Prefetch is:

Preload is different from prefetch in that it focuses on current navigation and fetches resources with high-priority.

The main difference is priority.
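For reference, all three are declared as <link> hints in the page: rel="preconnect" only warms up the connection to another origin, rel="preload" fetches a resource the current page needs at high priority, and rel="prefetch" fetches something a likely future navigation will need at low priority.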

Finding Hostnames Through Search Engines

Came across "Fast subdomains enumeration tool for penetration testers", a project that digs up hostnames through multiple search engines for penetration testing.

It supports five major search engines, plus Netcraft and DNSdumpster:

Sublist3r currently supports the following search engines: Google, Yahoo, Bing, Baidu, and Ask. More search engines may be added in the future. Sublist3r also gathers subdomains using Netcraft and DNSdumpster.

Too bad Yandex isn't included...
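A typical run looks something like this (a sketch based on the project's README; the -d/-o flags and the example.com target are my assumptions):

# enumerate subdomains of example.com via the search engines plus Netcraft and DNSdumpster
python sublist3r.py -d example.com

# optionally write the results to a file
python sublist3r.py -d example.com -o example.com-subdomains.txt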

Using curl to Test Whether a Reverse Proxy Works Correctly

After setting up a reverse proxy, you can use curl's --resolve feature to verify it:

curl -v --resolve i.kfs.io:443:68.232.45.191 https://i.kfs.io/article5/global/364,324,6v1/original.png > /dev/null

The third field of --resolve has to be an IP address; from the output you can see how it works:

* Added i.kfs.io:443:68.232.45.191 to DNS cache
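This also makes it handy for testing a new backend before switching DNS over, since the hostname (and thus SNI and certificate validation) stays intact; for example, with a hypothetical staging IP 10.0.0.5:

curl -v --resolve i.kfs.io:443:10.0.0.5 https://i.kfs.io/article5/global/364,324,6v1/original.png > /dev/null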

Route53 Health Checks Now Support HTTPS SNI...

Route53 health checks finally support SNI: "Amazon Route 53 Adds SNI Support for HTTPS Health Checks".

With SNI and HTTPS support, you can now create health checks for secure websites that rely on SNI to serve the correct website and certificate to requests for a particular domain name.

This feature is long overdue; I've been waiting forever...
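With the AWS CLI this should look roughly like the following (a sketch; create-health-check and the EnableSNI field come from the Route 53 API, but the exact shorthand syntax and the www.example.com target are assumptions on my side):

aws route53 create-health-check \
  --caller-reference my-sni-check-001 \
  --health-check-config Type=HTTPS,FullyQualifiedDomainName=www.example.com,Port=443,ResourcePath=/,EnableSNI=true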

CVE-2015-7547: The getaddrinfo() RCE (Remote Code Execution) Disaster

Google published a write-up of the CVE-2015-7547 security issue: "CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow".

While trying to divine why their SSH client segfaulted every time it connected to one specific host, a Google engineer found that the problem was not in OpenSSH but in the lower-level glibc:

Recently a Google engineer noticed that their SSH client segfaulted every time they tried to connect to a specific host. That engineer filed a ticket to investigate the behavior and after an intense investigation we discovered the issue lay in glibc and not in SSH as we were expecting.

Because this sits at the level of glibc, which every Linux machine has installed, a segfault that shows up by accident hints that things could be very bad under a deliberate attack, so Google put people on it to figure out just how far the vulnerability could be pushed:

Thanks to this engineer’s keen observation, we were able determine that the issue could result in remote code execution. We immediately began an in-depth analysis of the issue to determine whether it could be exploited, and possible fixes. We saw this as a challenge, and after some intense hacking sessions, we were able to craft a full working exploit!

During the investigation, Google discovered that the Red Hat folks were looking into the same problem: "(CVE-2015-7547) - In send_dg, the recvfrom function is NOT always using the buffer size of a newly created buffer (CVE-2015-7547)":

In the course of our investigation, and to our surprise, we learned that the glibc maintainers had previously been alerted of the issue via their bug tracker in July, 2015. (bug). We couldn't immediately tell whether the bug fix was underway, so we worked hard to make sure we understood the issue and then reached out to the glibc maintainers. To our delight, Florian Weimer and Carlos O’Donell of Red Hat had also been studying the bug’s impact, albeit completely independently! Due to the sensitive nature of the issue, the investigation, patch creation, and regression tests performed primarily by Florian and Carlos had continued “off-bug.”

The attack itself needs to bypass mitigations (such as ASLR), but it is still feasible, and Google's people have already written a working exploit:

Remote code execution is possible, but not straightforward. It requires bypassing the security mitigations present on the system, such as ASLR. We will not release our exploit code, but a non-weaponized Proof of Concept has been made available simultaneously with this blog post.

The technical details are also covered in Google's post: the buffer size is fixed at 2048 bytes, but the data received can exceed 2048 bytes, resulting in a buffer overflow:

glibc reserves 2048 bytes in the stack through alloca() for the DNS answer at _nss_dns_gethostbyname4_r() for hosting responses to a DNS query.

Later on, at send_dg() and send_vc(), if the response is larger than 2048 bytes, a new buffer is allocated from the heap and all the information (buffer pointer, new buffer size and response size) is updated.

The official glibc mailing list has an explanation as well: "[PATCH] CVE-2015-7547 --- glibc getaddrinfo() stack-based buffer overflow".
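To see whether a machine is on a patched glibc, a quick check (a sketch; the package name assumes Debian/Ubuntu, and long-running services need a restart after upgrading libc):

# print the installed glibc version
ldd --version | head -n 1

# on Debian/Ubuntu: upgrade the C library, then confirm the changelog mentions the CVE
sudo apt-get update && sudo apt-get install --only-upgrade libc6
apt-get changelog libc6 | grep -m 1 CVE-2015-7547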

Adblock Built from a hosts File...

Saw a huge combined hosts file at "Amalgamated hosts file", meant for blocking ads:

This repo consolidates several reputable hosts files and merges them into a single amalgamated hosts file with duplicates removed.

Currently this amalgamated hosts file contains 27,148 unique entries.

Twenty-seven thousand entries in a single hosts file seems like a bit much...

By the way, I wonder if it could be imported into BIND or Unbound so a whole organization could use it directly?
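For Unbound, one possible approach is turning the hosts entries into local-zone rules (a sketch; it assumes entries of the form "0.0.0.0 ad.example.com" or "127.0.0.1 ad.example.com", that unbound.conf includes /etc/unbound/unbound.conf.d/*.conf, and that remote control is enabled for unbound-control):

# emit a server: clause with one static local-zone per blocked hostname
# (a static local-zone with no local-data makes Unbound answer NXDOMAIN for those names)
( echo "server:"; awk '($1 == "0.0.0.0" || $1 == "127.0.0.1") && $2 != "localhost" { print "  local-zone: \"" $2 "\" static" }' hosts ) \
  | sudo tee /etc/unbound/unbound.conf.d/adblock.conf > /dev/null

sudo unbound-control reload

For BIND the usual equivalent would be a response policy zone (RPZ), which takes a bit more setup.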