
Cloudflare Launches a DNS Resolver Service on 1.1.1.1

Cloudflare has launched a DNS resolver service on 1.1.1.1: "Announcing 1.1.1.1: the fastest, privacy-first consumer DNS service". The headline features are privacy and performance.

Because of this IP address's peculiar status, it attracts a lot of strange traffic... Cloudflare worked out a deal with APNIC to obtain the right to use the address (and then announce it via anycast):

APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.

We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.

Cloudflare published a performance comparison (against Google Public DNS and OpenDNS), and the average speed is noticeably faster:

In Taiwan, HiNet users on the non-static plans (i.e. those connecting over PPPoE) see odd latency to 8.8.8.8:

Compare that with the response time from the same machine to 168.95.1.1:

However, if you are on a HiNet static plan (the 2-IP or 6-IP kind, where you skip PPPoE and configure the IP address directly in bridge mode), the two latencies are about the same; it is unclear whether Google's or HiNet's architecture is responsible.
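
To reproduce this kind of comparison yourself, a small loop over the resolvers is enough; a minimal sketch (the queried hostname is just an example, and the first run may hit cold caches):

$ for ns in 1.1.1.1 8.8.8.8 168.95.1.1; do
    echo "== $ns ==";
    dig @"$ns" www.example.com +noall +stats | grep 'Query time';
  done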

Another odd point is the https://1.1.1.1/ mentioned at the end of the article. In theory CAs shouldn't issue IP-based SSL certificates, right? Not sure what misunderstanding the CEO had... XD

Visit https://1.1.1.1/ from any device to get started with the Internet's fastest, privacy-first DNS service.

Update: after digging around, it turns out such certificates can be issued; it's just that most CAs don't offer them...
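
You can see this on the certificate itself: the Subject Alternative Name section of the certificate served on 1.1.1.1 carries IP Address entries alongside the DNS names. A quick check from the shell:

$ echo | openssl s_client -connect 1.1.1.1:443 2>/dev/null |
    openssl x509 -noout -text | grep -A 1 'Subject Alternative Name'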

Service Discovery for Amazon ECS

AWS has announced that Amazon ECS now supports the service discovery provided by Route 53: "Introducing Service Discovery for Amazon ECS".

In other words, the pieces are now integrated... Compared with the previous approach, where you had to wire everything up yourself, this saves a fair amount of work:

Previously, to ensure that services were able to discover and connect with each other, you had to configure and run your own service discovery system or connect every service to a load balancer. Now, you can enable service discovery for your containerized services with a simple selection in the ECS console, AWS CLI, or using the ECS API.

Back in 2016 AWS wrote "Service Discovery for Amazon ECS Using DNS", describing how to register and deregister services through event triggers and AWS Lambda:

Recently, we proposed a reference architecture for ELB-based service discovery that uses Amazon CloudWatch Events and AWS Lambda to register the service in Amazon Route 53 and uses Elastic Load Balancing functionality to perform health checks and manage request routing. An ELB-based service discovery solution works well for most services, but some services do not need a load balancer.

Now it looks like all of that can be replaced with the Auto Naming API...
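
As a rough sketch of the CLI flow (every ID and name below is hypothetical): create an Auto Naming namespace and service, then hand the service-discovery registry ARN to the ECS service:

$ aws servicediscovery create-private-dns-namespace \
    --name example.local --vpc vpc-0123456789abcdef0
$ aws servicediscovery create-service \
    --name backend \
    --dns-config 'NamespaceId=ns-examplens,DnsRecords=[{Type=A,TTL=60}]'
$ aws ecs create-service \
    --cluster example-cluster --service-name backend \
    --task-definition backend:1 --desired-count 2 \
    --service-registries registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-examplesvc

Note that A records require tasks running in awsvpc network mode; tasks on bridge or host mode go through SRV records instead.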

DNSFilter's Experience Moving from InfluxDB to TimescaleDB

This DNSFilter article on replacing InfluxDB with TimescaleDB is quite interesting: "Towards 3B time-series data points per day: Why DNSFilter replaced InfluxDB with TimescaleDB".

Until you have actually used both, this is really only one side of the story... Besides, this kind of migration also depends on how each company is organized; a team already familiar with PostgreSQL is far more likely to reach for TimescaleDB to handle its time-series data.

One point, though, is worth writing down:

Comparing TimescaleDB to InfluxDB at the same time — we realized we were losing data. InfluxDB relied on precisely timed execution of rollup commands to process the last X minutes of data into rollups. Combined with our series of rollups, we realized that some slow queries were causing us to lose data. The TimescaleDB data had 1–5% more entries! Also we no longer had to deal with cardinality issues, and could show our customers every last DNS request, even at a monthly rollup.

Losing data amounts to a warning to InfluxDB users: go check whether the data in your hands is correct... For applications that require 100% correctness, that is no joke @_@
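
For context, the rollups InfluxDB had to run as precisely timed jobs are just plain SQL on the TimescaleDB side; a minimal sketch with hypothetical table and column names:

$ psql dnsfilter -c "
    SELECT time_bucket('5 minutes', ts) AS bucket,
           count(*) AS requests
    FROM dns_requests
    WHERE ts > now() - interval '1 day'
    GROUP BY bucket
    ORDER BY bucket;"

time_bucket() is TimescaleDB's bucketing function; in this sketch the rollup is computed from the raw hypertable at query time, so there is no separate rollup job to mistime.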

Amazon Route 53's Auto Naming API Now Supports CNAME Records

Amazon Route 53's Auto Naming API can be used for service discovery (see the earlier post "Using Amazon Route 53 for Service Discovery"). Back then it only handled A/AAAA/SRV records; now you can register CNAMEs as well: "Amazon Route 53 Auto Naming Announces Support for CNAME Record Type and Alias to ELB".

The most direct impact is on the ELB side: putting an ELB in front greatly eases both the load-balancing and the record-count-limit issues (previously you relied on round-robin DNS to spread the load, and responses were capped at eight records at a time):

Beginning today, you can use the Amazon Route 53 Auto Naming APIs to create CNAME records when you register instances of your microservices, and your microservices can discover the CNAMEs by querying DNS for the service name. Additionally, you can use the Amazon Route 53 Auto Naming APIs to create Route 53 alias records that route traffic to Amazon Elastic Load Balancers (ELBs).
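
A sketch of the new record type through the API (IDs and hostnames are hypothetical): the service is created with a CNAME DnsRecord, and each instance registers with the AWS_INSTANCE_CNAME attribute:

$ aws servicediscovery create-service \
    --name frontend \
    --dns-config 'NamespaceId=ns-examplens,DnsRecords=[{Type=CNAME,TTL=60}]'
$ aws servicediscovery register-instance \
    --service-id srv-examplesvc --instance-id web-1 \
    --attributes AWS_INSTANCE_CNAME=example-elb-1234567890.us-east-1.elb.amazonaws.com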

Using Amazon Route 53 for Service Discovery

A new Amazon Route 53 feature takes care of what you previously had to build yourself for service discovery: "Amazon Route 53 Releases Auto Naming API for Service Name Management and Discovery". The official documentation is at "Using Autonaming for Service Discovery".

There are some limits for now; a namespace (domain name) can currently hold only five services:

DNS settings for up to five records.

And when responding to DNS queries, it returns at most eight records:

When Amazon Route 53 receives a DNS query for the name of an instance, such as backend.example.com, it responds with up to eight IP addresses (for A or AAAA records) or up to eight SRV record values.

It caps responses at eight records, but presumably you can register more than eight... (i.e. each response returns a different subset).
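
On the consuming side, discovery is then just an ordinary DNS query against the service name (the names and addresses below are hypothetical):

$ dig backend.example.local a +short
10.0.1.17
10.0.2.44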

Self-hosted services (such as Cassandra or ScyllaDB) can hook into this directly, so you no longer need to run your own Consul.

Four regions are supported for now, and Asia is not in this wave:

Amazon Route 53 Auto Naming is available in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions.

LinkedIn Forgot to Renew, and an SSL Certificate Expired

Via Netcraft, news of a LinkedIn screw-up; this time it was the country-mixed version of the site that broke: "LinkedIn certificate blunder leaves users LockedOut!".

The DNS also shows the two hostnames pointing, via CNAME, at different load balancers:

;; ANSWER SECTION:
www.linkedin.com.       260     IN      CNAME   2-01-2c3e-003c.cdx.cedexis.net.
2-01-2c3e-003c.cdx.cedexis.net. 93 IN   CNAME   pop-ehk1.www.linkedin.com.
pop-ehk1.www.linkedin.com. 3560 IN      A       144.2.3.1
;; ANSWER SECTION:
de.linkedin.com.        86400   IN      CNAME   cctld.linkedin.com.
cctld.linkedin.com.     86400   IN      CNAME   mix.linkedin.com.
mix.linkedin.com.       213     IN      CNAME   pop-ehk1.mix.linkedin.com.
pop-ehk1.mix.linkedin.com. 3546 IN      A       144.2.3.5

SSL Labs likewise shows that the Alternative names differ: "SSL Server Test: www.linkedin.com (Powered by Qualys SSL Labs)", "SSL Server Test: de.linkedin.com (Powered by Qualys SSL Labs)".
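
The expiry itself is easy to check from the shell; the certificate's validity window comes straight out of openssl:

$ echo | openssl s_client -connect de.linkedin.com:443 -servername de.linkedin.com 2>/dev/null |
    openssl x509 -noout -dates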

And because LinkedIn has HSTS enabled, users could not log in through the browser at all:

On Google Chrome you can type badidea to bypass it (see "Connecting to a Site Blocked by HSTS in Google Chrome"), but on Mozilla Firefox I have not found a way to bypass it in the UI; you have to edit the SiteSecurityServiceState.txt file instead: "HTTP Strict Transport Security prevents me from accessing a server that I'm doing development on".

That said, since the two clusters run independently, just changing the URL should make things work...

It has been rare in recent years to see a big company drop the ball like this; quite entertaining XD

AWS Launches Amazon GuardDuty for Internal Network Monitoring

AWS has launched Amazon GuardDuty for monitoring internal networks: "Amazon GuardDuty – Continuous Security Monitoring & Threat Detection".

The diagram shows it pulling together several kinds of log data and judging them in combination:

In combination with information gleaned from your VPC Flow Logs, AWS CloudTrail Event Logs, and DNS logs, this allows GuardDuty to detect many different types of dangerous and mischievous behavior including probes for known vulnerabilities, port scans and probes, and access from unusual locations.

So even Bitcoin-related sites are used as one of the signals XD

It opened in quite a few regions (compared with the earlier AWS Elemental MediaOOXX series...):

Amazon GuardDuty is available in production form in the US East (Northern Virginia), US East (Ohio), US West (Oregon), US West (Northern California), EU (Ireland), EU (Frankfurt), EU (London), South America (São Paulo), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions and you can start using it today!
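
Turning it on is a single call per region, after which findings can be listed against the detector (a minimal sketch; substitute the detector id returned by the first call):

$ aws guardduty create-detector --enable
$ aws guardduty list-findings --detector-id <detector-id>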

PChome Fixed the Problem, Plus Notes on RFC 4074

Testing earlier showed that PChome has fixed the problem described before ("Why Connections to PChome 24h Are Slow...", "Why Connections to PChome 24h Are Slow... (Continued)"). Besides tidying that up here, I also need to correct a mistake in those earlier posts.

RFC 4074 (Common Misbehavior Against DNS Queries for IPv6 Addresses) covers how a DNS server should respond when a name only has an IPv4 address.

Section "3. Expected Behavior" explains the correct approach: when there is an A RR but no AAAA RR, the server should return NOERROR with an empty answer section:

Suppose that an authoritative server has an A RR but has no AAAA RR for a host name. Then, the server should return a response to a query for an AAAA RR of the name with the response code (RCODE) being 0 (indicating no error) and with an empty answer section (see Sections 4.3.2 and 6.2.4 of [1]). Such a response indicates that there is at least one RR of a different type than AAAA for the queried name, and the stub resolver can then look for A RRs.

Section "4.2. Return 'Name Error'" points out that returning NXDOMAIN (3) means the queried name has no RRs of any type, not merely no AAAA record. That is the mistake I made (the earlier posts recommended returning NXDOMAIN):

This type of server returns a response with RCODE 3 ("Name Error") to a query for an AAAA RR, indicating that it does not have any RRs of any type for the queried name.

With this response, the stub resolver may immediately give up and never fall back. Even if the resolver retries with a query for an A RR, the negative response for the name has been cached in the caching server, and the caching server will simply return the negative response. As a result, the stub resolver considers this to be a fatal error in name resolution.

Several examples of this behavior are known to the authors. As of this writing, all have been fixed.
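
A quick way to tell the two cases apart is the status and answer count that dig reports: NOERROR with ANSWER: 0 is the correct "A-only name" response, while NXDOMAIN claims the name does not exist at all. A one-liner against a hypothetical A-only hostname:

$ dig @ns1.example.com v4only.example.com aaaa +noall +comments | grep -E 'status:|ANSWER:'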

PChome's fix now returns the correct value (rather than the NXDOMAIN I had suggested):

$ dig shopping.gs1.pchome.com.tw aaaa @ns1.gs1.pchome.com.tw

; <<>> DiG 9.9.5-3ubuntu0.16-Ubuntu <<>> shopping.gs1.pchome.com.tw aaaa @ns1.gs1.pchome.com.tw
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40767
;; flags: qr aa rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;shopping.gs1.pchome.com.tw.    IN      AAAA

;; AUTHORITY SECTION:
gs1.pchome.com.tw.      5       IN      SOA     ns1.gs1.pchome.com.tw. root.dns.pchome.com.tw. 20171123 3600 3 3600 5

;; Query time: 16 msec
;; SERVER: 210.242.216.91#53(210.242.216.91)
;; WHEN: Fri Nov 24 01:44:52 CST 2017
;; MSG SIZE  rcvd: 134

The RFC series has several other documents worth reading, such as RFC 2308 (Negative Caching of DNS Queries (DNS NCACHE)), RFC 4697 (Observed DNS Resolution Misbehavior), and RFC 8020 (NXDOMAIN: There Really Is Nothing Underneath). They describe many common problems and the correct ways to handle them, and reading through them helps a lot with today's increasingly complex DNS architectures.

Why Connections to PChome 24h Are Slow... (Continued)

The previous post, "Why Connections to PChome 24h Are Slow...", said that DNS resolvers fall over, but not how they fall over... Since the specs do not spell out what to do when the record you are asking for cannot be resolved, each implementation handles it a little differently.

I put the results of querying each DNS resolver 100 times in a GitHub Gist: "Query 24h.pchome.com.tw". They all return SERVFAIL, only with different timing (the x.xxxx total field at the end is the actual elapsed time, i.e. wall clock).
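
The measurement itself is nothing fancy; something along these lines reproduces the wall-clock totals (a sketch of what the Gist does, not the exact script; swap in the resolver under test):

$ time (for i in $(seq 1 100); do
    dig @168.95.1.1 24h.pchome.com.tw > /dev/null;
  done)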

Let's start with the protagonists: HiNet's 168.95.1.1 and 168.95.192.1, which are presumably also the resolvers used by the largest share of PChome 24h users.

These two resolvers do not return SERVFAIL right away when they hit trouble, and industry rumor has it that Chunghwa Telecom has modified quite a bit of the code, so the behavior differs from typical open source software. Since I cannot see the DNS packets on PChome's side, I can only guess from the behavior... It looks like once the first round of queries all fail, the resolver sleeps for a random interval, asks again, and only returns SERVFAIL if the second attempt fails too.

The random sleep looks like it may be up to 10 seconds, since that is the longest time the data shows.

SEEDNet's 139.175.1.1 and Google's 8.8.8.8 have no such problem; both return SERVFAIL immediately.

The recently launched 9.9.9.9 (see "A New DNS Resolver: 9.9.9.9") shows a peculiar pattern: the first three queries are slow (rows 2, 3, and 5), but everything after that is fast. Perhaps three servers are handling the service in Singapore (my test machine currently reaches 9.9.9.9 there), each running some special confirmation step the first time it finds no answer, after which the result is cached?

So every DNS resolver reacts a bit differently, and the biggest one is the one with the problem XD

24h.pchome.com.tw stalls once, ecvip.pchome.com.tw stalls again, and the image host a.ecimg.tw stalls yet again; a few more domains on a page and it becomes unbearable XD

Of course, switching to 8.8.8.8 or going through proxy.hinet.net would fix this for me, but it still seemed worth writing down (scratches head).
