EU-US Privacy Shield struck down by the EU Court of Justice

Saw this in the news report 「EU rejects US data sharing agreement over privacy concerns」.

The press release from the CJEU (the EU's top court) is available at 「The Court of Justice invalidates Decision 2016/1250 on the adequacy of the protection provided by the EU-US Data Protection Shield」. Although the EU-US Privacy Shield has been struck down, the original 2010 framework remains valid:

However, it considers that Commission Decision 2010/87 on standard contractual clauses for the transfer of personal data to processors established in third countries is valid.

The Wikipedia entry puts it more simply: the core issue is that the protections on the US side of the agreement fall short of EU standards:

A final CJEU decision was published on 16 July 2020. The EU-US Privacy Shield for data sharing was struck down by the European Court of Justice on the grounds it did not provide adequate protections to EU citizens on government snooping.

I remember this one was fought over for a long time; it has finally been settled by the top court...

Lecture 4: 「Data Wrangling」

It has been a while since the last post in this series; time to pay down some debt...

This series is spun off from 『MIT's 「The Missing Semester of Your CS Education」』; this post covers the 「Data Wrangling」 lecture.

The lecture is about how to use pipes. Before getting to the tools themselves, there is an important concept that really should be explained first (but is not mentioned in the lecture): the Unix philosophy, the idea that tools in a Unix environment are each designed to do one thing well.

The most common way to glue these tools together is the pipe; the lecture shows how grep, sed, and sort are used, and how to chain them with pipes.

It is worth noting here that pipes spread different pieces of work across separate processes, which conveniently takes advantage of today's common multi-CPU machines.
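
As a rough sketch of the same idea in Python (the shell does this wiring for you; the log file path below is just a placeholder), each stage is a separate OS process connected by pipes, so the kernel can schedule them on different CPUs:

    import subprocess

    # Equivalent of: grep ssh /var/log/auth.log | sort | uniq -c
    # Each stage is its own OS process; the kernel can run them
    # concurrently on separate CPUs.
    grep = subprocess.Popen(["grep", "ssh", "/var/log/auth.log"],
                            stdout=subprocess.PIPE)
    sort = subprocess.Popen(["sort"], stdin=grep.stdout,
                            stdout=subprocess.PIPE)
    uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout,
                            stdout=subprocess.PIPE)
    grep.stdout.close()  # so grep gets SIGPIPE if a later stage exits
    sort.stdout.close()
    print(uniq.communicate()[0].decode())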

Since grep came up, the lecture also spends quite a bit of space on regular expressions, another important foundation in the CS curriculum.

That much space is justified: regular expressions are highly practical, and on the academic side they tie into DFA and NFA in automata theory, making them a good starting point for the theory of computation.
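
A small Python taste of the kind of extraction the lecture does on sshd logs (the sample lines below are made up):

    import re

    # Pull the user name out of sshd-style log lines.
    pattern = re.compile(
        r"Disconnected from (?:invalid |authenticating )?user (\S+)")

    lines = [
        "Disconnected from invalid user admin 10.0.0.1 port 22",
        "Disconnected from authenticating user root 10.0.0.2 port 22",
    ]
    for line in lines:
        m = pattern.search(line)
        if m:
            print(m.group(1))  # admin, root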

The lecture then brings up AWK. Worth noting here: although tools like Perl can do similar things (and more), AWK is part of the POSIX standard, so it is almost guaranteed to be present on every operating system, and its syntax is fairly simple, so it is still well worth learning...
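
For a sense of what AWK one-liners do, the classic `awk '{ print $2 }'` splits each line on whitespace and prints the second field; a rough Python rendering of the same idea (not AWK itself, just an illustration):

    import sys

    # Rough equivalent of: awk '{ print $2 }'
    # Split each input line on whitespace and print the second field.
    for line in sys.stdin:
        fields = line.split()
        if len(fields) >= 2:
            print(fields[1])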

Toward the end, gnuplot makes an appearance to draw a chart, and there is a demonstration of the almighty xargs (here I would also recommend reading the manpage; it pairs nicely with tools like find, and can parallelize work across processes).

Finally, the lecture shows how to handle binary data.

Open Data for the Taiwan CDC's daily COVID-19 specimen counts

It was mentioned at a press conference that the Taiwan CDC's website now publishes the number of specimens submitted for testing each day. After some digging it can be found at 「台灣COVID-19冠狀病毒檢測每日送驗數」 (Taiwan's daily COVID-19 specimen submission counts). The site's preview interface cannot show the latest data, but the downloaded file turns out to be a UTF-8 CSV, which should be workable...
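
A minimal sketch for reading that kind of file in Python (the file name here is a placeholder, and the column names depend on the actual header row of the download):

    import csv

    # "daily_tests.csv" is a placeholder for the downloaded file;
    # each row comes back as a dict keyed by the CSV header.
    with open("daily_tests.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            print(row)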

Finding this dataset took some effort (neither DuckDuckGo nor Google surfaced it directly); I eventually got there through these steps:

I first clicked 「COVID-19台灣最新病例、檢驗統計」 (latest COVID-19 cases and testing statistics for Taiwan) in the popular-datasets section, only to find it holds a single record, apparently last updated on 2020/04/24, so I had to look elsewhere.

Next I tried 「最新消息」 (Latest News) at the top, which turned out to be a system announcement area, not what I wanted; only after that did I stumble onto the right path...

And that is where the 「台灣COVID-19冠狀病毒檢測每日送驗數」 dataset mentioned above shows up.

Incidentally, the data.cdc.gov.tw site appears to be hosted in Microsoft Azure's Japan region?

The EU updates its guidelines on cookie consent

Seen on TechCrunch: the EU has updated its guidelines on how consent for cookies may be obtained: 「No cookie consent walls — and no, scrolling isn't consent, says EU data protection body」. The English PDF is available at 「Guidelines 05/2020 on consent under Regulation 2016/679」.

The guidelines explain under which circumstances the "consent" obtained is actually valid, focusing on the power imbalance between users and operators, and on which user-hostile practices the GDPR will block.

The document opens by explaining what free/freely given means, with plenty of examples. While browsing the examples I also noticed that in an employment relationship, because employees are under pressure they cannot refuse, their consent may not be valid either, which serves to protect employees...

The TechCrunch article singles out two patterns that are very common on the internet today (cookie walls and scrolling) and explains that, under the GDPR, these widespread practices do not obtain valid consent.

By that logic, Medium's login wall should also run afoul of it (it forces you to register with Medium, and thus agree to Medium's terms of service, before you can read). The EU document is fairly clear this time; after a few more lawsuits and a few more years of the GDPR in action, different approaches should emerge...

Brave files a GDPR complaint against Google

Brave (the browser forked from Chromium) has filed a complaint that Google is not complying with the GDPR: 「Formal GDPR complaint against Google's internal data free-for-all」.

The complaint centers on "purpose limitation", as defined in 「REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016」:

1. Personal data shall be:

(b)

collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’);

The key words are "specified" and "explicit": the GDPR requires purposes to be stated clearly, yet the 「Purported processing purpose」 section of the compiled document 「Inside the black box」 shows a large number of extremely broad descriptions.

Google will presumably push back on this point and argue that such descriptions are sufficient; now it is up to the EU to decide what to do...

The problem of Google tracking users via x-client-data

A while ago the Chromium team was studying the removal of the User-Agent string (see the earlier post 「User-Agent 的淘汰提案」 on the deprecation proposal), and kiwibrowser dropped a bombshell in the discussion: Google has long been sending an x-client-data HTTP header to its own sites, enough to single out an individual browser installation: 「Partial freezing of the User-Agent string#467」.

Google's white paper says it is used for server-side experiments:

We want to build features that users want, so a subset of users may get a sneak peek at new functionality being tested before it’s launched to the world at large. A list of field trials that are currently active on your installation of Chrome will be included in all requests sent to Google. This Chrome-Variations header (X-Client-Data) will not contain any personally identifiable information, and will only describe the state of the installation of Chrome itself, including active variations, as well as server-side experiments that may affect the installation.

The variations active for a given installation are determined by a seed number which is randomly selected on first run. If usage statistics and crash reports are disabled, this number is chosen between 0 and 7999 (13 bits of entropy). If you would like to reset your variations seed, run Chrome with the command line flag “--reset-variation-state”. Experiments may be further limited by country (determined by your IP address), operating system, Chrome version and other parameters.

But because this is enabled by default, even users who later turn it off are thereby sorted into another, smaller bucket that is still highly identifying; it does not stop being identifying just because Google says it cannot identify anyone.

Also, looking at the explanation in the source code:

    // Note the criteria for attaching client experiment headers:
    // 1. We only transmit to Google owned domains which can evaluate
    // experiments.
    //    1a. These include hosts which have a standard postfix such as:
    //         *.doubleclick.net or *.googlesyndication.com or
    //         exactly www.googleadservices.com or
    //         international TLD domains *.google.<TLD> or *.youtube.<TLD>.
    // 2. Only transmit for non-Incognito profiles.
    // 3. For the X-Client-Data header, only include non-empty variation IDs.

You can see that *.doubleclick.net, *.googlesyndication.com, and www.googleadservices.com are all advertising-related. Google's own search pages serve ads directly (not through the domains above), and YouTube is in the same situation, so it is entirely reasonable to infer that x-client-data feeds into the advertising systems.

The Register, in 「Is Chrome really secretly stalking you across Google sites using per-install ID numbers? We reveal the truth」, flags the GDPR angle in a bolded Update; it is unclear whether some authority has started investigating:

Updated Google is potentially facing a massive privacy and GDPR row over Chrome sending per-installation ID numbers to the mothership.

Until this is fixed, the interim workaround is an extension that rewrites HTTP headers to strip the field.
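
Besides a browser extension, a local proxy can do the stripping too. A minimal sketch as a mitmproxy addon script (this assumes you are willing to route the browser through mitmproxy and install its CA certificate):

    # strip_x_client_data.py -- run with: mitmdump -s strip_x_client_data.py
    # Drops the x-client-data header from every outgoing request.

    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Header lookups in mitmproxy are case-insensitive.
        if "x-client-data" in flow.request.headers:
            del flow.request.headers["x-client-data"]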

Python 3.7+ guarantees dict ordering

Seen in 「Dicts are now ordered, get used to it」: after the official implementation (i.e., CPython) changed how dict works internally, it was decided to make the ordering part of the social contract, rather than leaving it as an implementation side effect (i.e., something later versions would be free to drop).

Changed in version 3.7: Dictionary order is guaranteed to be insertion order. This behavior was an implementation detail of CPython from 3.6.

The two diagrams in the post clearly show how dict used to be implemented versus how 3.7+ does it:

Seen that way, it is easy to understand.
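
A quick self-contained check of what the guarantee means in practice:

    # On Python 3.7+, iteration order is guaranteed to be insertion order.
    d = {}
    d["banana"] = 3
    d["apple"] = 1
    d["cherry"] = 2
    print(list(d))   # ['banana', 'apple', 'cherry'] -- insertion order

    d["apple"] = 99  # updating an existing key does not move it
    print(list(d))   # ['banana', 'apple', 'cherry']

    del d["banana"]  # deleting and re-inserting moves it to the end
    d["banana"] = 0
    print(list(d))   # ['apple', 'cherry', 'banana']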

That said, some systems still ship Python 3.5 (such as Ubuntu 16.04's built-in python3) or Python 3.6 (Ubuntu 18.04's built-in python3, where the behavior exists but had not yet been documented at the time), so it may be better not to rely on this just yet.

Then again, iterating in insertion order does not seem to come up all that often...

Avast and Jumpshot sold users' browsing history and behavior

Some time has passed; let me organize the material and write it down for the record...

For coverage, see PCMag's 「The Cost of Avast's Free Antivirus: Companies Can Spy on Your Clicks」 and Motherboard (VICE)'s 「Leaked Documents Expose the Secretive Market for Your Web Browsing Data」. The summaries already state the key point: Avast has been selling users' browsing history and behavior:

Avast is harvesting users' browser histories on the pretext that the data has been 'de-identified,' thus protecting your privacy. But the data, which is being sold to third parties, can be linked back to people's real identities, exposing every click and search they've made.

Avast used its free antivirus software to harvest users' browsing history and behavior, then sold it through its subsidiary Jumpshot:

The Avast division charged with selling the data is Jumpshot, a company subsidiary that's been offering access to user traffic from 100 million devices, including PCs and phones.

A textbook case of "free is the most expensive". The more interesting question is who the data was sold to:

Who else might have access to Jumpshot's data remains unclear. The company's website says it's worked with other brands, including IBM, Microsoft, and Google. However, Microsoft said it has no current relationship with Jumpshot. IBM, on the other hand, has "no record" of being a client of either Avast or Jumpshot. Google did not respond to a request for comment.

Microsoft says it has no current relationship, IBM says it has "no record" of being a client, and Google simply did not respond.

The articles go on to show what the data looks like and how it can be used:

For instance, a single click can theoretically look like this:

Device ID: abc123x
Date: 2019/12/01
Hour Minute Second: 12:03:05
Domain: Amazon.com
Product: Apple iPad Pro 10.5 - 2017 Model - 256GB, Rose Gold
Behavior: Add to Cart

At first glance, the click looks harmless. You can't pin it to an exact user. That is, unless you're Amazon.com, which could easily figure out which Amazon user bought an iPad Pro at 12:03:05 on Dec. 1, 2019. Suddenly, device ID: 123abcx is a known user. And whatever else Jumpshot has on 123abcx's activity—from other e-commerce purchases to Google searches—is no longer anonymous.

So if Google has its hands on this data, it can cross-reference it against its own records and recover a user's complete history.
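
To make the cross-referencing concrete, here is a toy sketch (every record below is fabricated) of how a retailer could tie a Jumpshot device ID back to a known account by joining on timestamp and action:

    # Toy illustration only; all records are made up.
    jumpshot = [
        {"device_id": "abc123x", "time": "2019-12-01 12:03:05",
         "behavior": "Add to Cart"},
    ]
    # The retailer's own logs already know which account acted when.
    retailer = [
        {"account": "alice@example.com", "time": "2019-12-01 12:03:05",
         "behavior": "Add to Cart"},
    ]

    # A match on (time, behavior) de-anonymizes the device ID -- and
    # with it, every other click Jumpshot holds for that device.
    for j in jumpshot:
        for r in retailer:
            if (j["time"], j["behavior"]) == (r["time"], r["behavior"]):
                print(j["device_id"], "=>", r["account"])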

Not long after the story broke, Avast announced it was shutting down Jumpshot. The damage control was so smooth it looked rehearsed: 「A message from Avast CEO Ondrej Vlcek」.

A quick look shows Avast also owns AVG, plus a VPN service...

Xor filter: one step beyond Bloom filter and Cuckoo filter

The Bloom filter is one of the classic textbook algorithms; in practice there are better choices, such as the previously covered Cuckoo filter: 「Cuckoo Filter:比 Bloom Filter 多了 Delete」.

Now there is a newly proposed data structure claimed to beat both the Bloom filter and the Cuckoo filter: 「Xor Filters: Faster and Smaller Than Bloom Filters」.

It is not a strict improvement, though; the most immediately visible difference is that it does not support delete:

Deletions are generally unsafe with these filters even in principle because they track hash values and cannot deal with collisions without access to the object data: if you have two objects mapping to the same hash value, and you have a filter on hash values, it is going to be difficult to delete one without the other.
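
To see why deletion is fundamentally awkward for this family, here is a minimal Bloom-filter sketch in Python (a simplified illustration, not the Xor filter construction itself): two keys can share bits, so naively clearing one key's bits can corrupt another's membership:

    import hashlib

    M = 64  # number of bits; deliberately tiny so collisions are likely

    def positions(key, k=3):
        # Derive k bit positions from a hash of the key.
        h = hashlib.sha256(key.encode()).digest()
        return [h[i] % M for i in range(k)]

    bits = [0] * M

    def add(key):
        for p in positions(key):
            bits[p] = 1

    def contains(key):
        return all(bits[p] for p in positions(key))

    add("alpha")
    add("beta")
    print(contains("alpha"), contains("beta"))  # True True

    # Naive delete: clear alpha's bits. Any bit shared with beta now
    # breaks beta's membership -- which is why plain Bloom filters
    # (and Xor filters) do not offer deletion.
    for p in positions("alpha"):
        bits[p] = 0
    print(contains("beta"))  # may now be False despite being added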

The preprint is available on arXiv: 「Xor Filters: Faster and Smaller Than Bloom and Cuckoo Filters」.

Saving bandwidth: the ultimate version...

Saw 「Three ways to reduce the costs of your HTTP(S) API on AWS」, which walks through ways to cut bandwidth costs on AWS; I could not stop laughing while reading it XD

The first is trimming unused headers from HTTP responses: with five billion HTTP requests per day, every byte shaved off saves USD$0.25/day:

Since we would send this five billion times per day, every byte we could shave off would save five gigabytes of outgoing data, for a saving of 25 cents per day per byte removed.

After tuning a few things they saved about USD$1,500/month:

Sending 109 bytes instead of 333 means saving $56 per day, or a bit over $1,500 per month.
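
The arithmetic checks out. Note that the per-GB price below is my own inference from the "one byte saves 25 cents a day" figure, not a number stated in the article:

    requests_per_day = 5_000_000_000
    price_per_gb = 0.25 / 5          # 1 byte/request = 5 GB/day = $0.25
    bytes_saved = 333 - 109          # headers went from 333 to 109 bytes

    gb_per_day = bytes_saved * requests_per_day / 1e9  # 1120 GB/day
    print(gb_per_day * price_per_gb)  # ~$56/day, a bit over $1,500/month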

The second attacks the TLS side. One initial direction was to use TLS session resumption to cut the cost of subsequent connections, but they found nothing tunable:

One thing that reduces handshake transfer size is TLS session resumption. Basically, when a client connects to the service for the second time, it can ask the server to resume the previous TLS session instead of starting a new one, meaning that it doesn’t have to send the certificate again. By looking at access logs, we found that 11% of requests were using a reused TLS session. However, we have a very diverse set of clients that we don’t have much control over, and we also couldn’t find any settings for the AWS Application Load Balancer for session cache size or similar, so there isn’t really anything we can do to affect this.

So instead they raised the idle timeout (avoiding re-handshakes):

That leaves reducing the number of handshakes required by reducing the number of connections that the clients need to establish. The default setting for AWS load balancers is to close idle connections after 60 seconds, but it seems to be beneficial to raise this to 10 minutes. This reduced data transfer costs by an additional 8%.

Next, the SSL certificate chain issued by AWS itself was too fat, so they switched to one from DigiCert, drastically shrinking the certificate and saving almost USD$200/day:

So given that the clients establish approximately two billion connections per day, we’d expect to save four terabytes of outgoing data every day. The actual savings were closer to three terabytes, but this still reduced data transfer costs for a typical day by almost $200.

These tricks are quite entertaining XDDD

Then again, all of these methods are about squeezing down the amount of data transferred to clients; arguably the bigger win, beyond cost, is faster network response times...