Salesforce Plans to Buy Tableau

Salesforce is planning to acquire Tableau...

Both sides have signed the agreement and each put out a press release, both under the same title: Salesforce's "Salesforce Signs Definitive Agreement to Acquire Tableau" and Tableau's "Salesforce Signs Definitive Agreement to Acquire Tableau".

Both companies are listed on the NYSE, and the acquisition will be an all-stock deal (1 Tableau share for 1.103 Salesforce shares, apparently computed from the trailing three-day share price):

SAN FRANCISCO, Calif. and SEATTLE, Wash. — June 10, 2019 -- Salesforce (NYSE:CRM), the global leader in CRM, and Tableau Software (NYSE: DATA), the leading analytics platform, have entered into a definitive agreement under which Salesforce will acquire Tableau in an all-stock transaction, pursuant to which each share of Tableau Class A and Class B common stock will be exchanged for 1.103 shares of Salesforce common stock, representing an enterprise value of $15.7 billion (net of cash), based on the trailing 3-day volume weighted average price of Salesforce's shares as of June 7, 2019.
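As a quick back-of-the-envelope check (a minimal sketch; crm_vwap below is a placeholder, not the actual June 7 figure), the implied per-share value of DATA is just the exchange ratio times CRM's reference price:

#include <stdio.h>

int main(void)
{
    const double exchange_ratio = 1.103; /* CRM shares per DATA share, per the press release */
    const double crm_vwap = 150.0;       /* hypothetical trailing 3-day VWAP of CRM, in USD */

    printf("implied value per DATA share: USD %.2f\n",
           exchange_ratio * crm_vwap);
    return 0;
}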

Salesforce wanting to buy Tableau is pretty surprising news...

Privacy Issues with Apple's New "Find My"

People are already discussing the mechanism and the privacy issues of this new feature announced at WWDC: "How does Apple (privately) find your offline devices?".

The previous-generation "Find my iPhone" needed network connectivity and GPS data before a device would show up in the system; this generation adds a BLE beacon, which any iOS device that hears it relays back to Apple:

Every active iPhone will continuously monitor for BLE beacon messages that might be coming from a lost device. When it picks up one of these signals, the participating phone tags the data with its own current GPS location; then it sends the whole package up to Apple’s servers.
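Apple hasn't published the exact wire format, so here is a purely hypothetical sketch of what a relayed report might contain, based only on the description above; every name and field size is an assumption:

#include <stdint.h>
#include <time.h>

/* Hypothetical shape of the package a finder iPhone uploads to Apple.
 * Field names and sizes are illustrative assumptions, not Apple's format. */
struct finder_report {
    uint8_t beacon_payload[28]; /* rotating identifier heard over BLE */
    double  finder_lat;         /* GPS position of the relaying phone... */
    double  finder_lon;         /* ...the finder's location, not the lost device's */
    time_t  observed_at;        /* when the beacon was picked up */
};

Note that finder_lat/finder_lon describe the relaying phone, which is exactly why the relay step itself can expose a bystander's location to Apple unless the report is protected end to end.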

There are several privacy questions here: the relaying iOS device also exposes its own location to Apple; and can an iOS device that receives the BLE beacon decode information about the lost device itself? It also looks like stores could use this mechanism to actively probe and learn quite a bit (for example, the article mentions that Apple previously hardened privacy with randomized MAC addresses, and this opens yet another hole there). The information Apple has released so far isn't clear enough; it will take actual reverse engineering to know for sure...

Twitter's Research on 2x vs. 3x Images...

It turns out that 2x images are enough most of the time: "Capping image fidelity on ultra-high resolution devices".

The most modern screens are OLED. These screens boast some really great features like pure blacks, and are marketed as 3x scale. However, nearly no "3x scale" OLED actually has perfect 3x3 pixels per dot on their screen.

Because the screens don't actually meet the 3x bar, just serving the 2x image is good enough, saving bandwidth and download time alike:

This means that most OLED screens that say they are 3x resolution, are actually 3x in the green color, but only 1.5x in the red and blue colors. Showing a 3x resolution image in the app vs a 2x resolution image will be visually the same, though the 3x image takes significantly more data. Even true 3x resolution screens are wasteful as the human eye cannot see that level of detail without something like a magnifying glass.

Saving 38% of the data and 32% of the time:

There's no difference that the human eye can see, but will save 38% on data and 32% on latency on the capped image load for this particular example which is reflective of most images that load on Twitter.
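The raw pixel arithmetic already shows why capping helps (a minimal sketch; actual byte savings depend on compression, which is why Twitter measured 38% rather than the raw pixel ratio):

#include <stdio.h>

int main(void)
{
    /* An image that is 100x100 points, rendered at 2x and 3x scale;
       the 100x100 size is an arbitrary example. */
    const int pts = 100;
    const long px2 = (long)(pts * 2) * (pts * 2); /* 40,000 pixels */
    const long px3 = (long)(pts * 3) * (pts * 3); /* 90,000 pixels */

    printf("3x asset has %.2fx the pixels of the 2x asset\n",
           (double)px3 / (double)px2);                 /* 2.25 */
    printf("capping at 2x drops %.0f%% of the raw pixels\n",
           100.0 * (1.0 - (double)px2 / (double)px3)); /* ~56% */
    return 0;
}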

This also leads to another thought: if you don't have much time to study it, consider just shipping the 2x version and skipping a dedicated 3x version...

GrabFood Uses Location Data to Correct Restaurant Information

Grab's "How we harnessed the wisdom of crowds to improve restaurant location accuracy" is a write-up by their data team on how to use data they already have to quickly correct restaurant information. The methods described don't require machine learning; a handful of simple statistical calculations is enough to quickly fix problems in the existing records.

The data is actually collected through the driver app, which sends a lot of information back to the servers (such as periodically reported GPS positions and pick-up status). And because the drivers know their home turf, what's in their heads is considerably more accurate than the map:

One of the biggest advantages we have is the huge driver-partner fleet we have on the ground in cities across Southeast Asia. They know the roads and cities like the back of their hand, and they are resourceful. As a result, they are often able to find the restaurants and complete orders even if the location was registered incorrectly.

Two of the signals they compute for each restaurant:

  • Fraction of the orders where the pick-up location was not “at” the restaurant: This fraction indicates the number of orders with a pick-up location not near the registered restaurant location (with near being defined both spatially and temporally as above). A higher value indicates a higher likelihood of the restaurant not being in the registered location subject to order volume

  • Median distance between registered and estimated locations: This factor is used to rank restaurants by a notion of “importance”. A restaurant which is just outside the fixed radius from above can be addressed after another restaurant which is a kilometer away.
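A minimal sketch of how signals like these can be computed from raw pick-up points (illustrative only: the radius, the made-up coordinates, and using the median pick-up distance as a stand-in for the registered-vs-estimated distance are all assumptions, not Grab's actual pipeline):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Great-circle (haversine) distance between two lat/lon points, in meters. */
static double haversine(double lat1, double lon1, double lat2, double lon2)
{
    const double p = acos(-1.0) / 180.0; /* degrees -> radians */
    double a = 0.5 - cos((lat2 - lat1) * p) / 2 +
               cos(lat1 * p) * cos(lat2 * p) * (1 - cos((lon2 - lon1) * p)) / 2;
    return 2 * 6371000.0 * asin(sqrt(a));
}

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    /* Registered restaurant location plus pick-up points from completed
       orders; all coordinates are made-up numbers for illustration. */
    const double reg_lat = 1.3521, reg_lon = 103.8198;
    const double pickups[][2] = {
        {1.35211, 103.81985}, {1.35300, 103.82100}, {1.36100, 103.83000},
    };
    const size_t n = sizeof(pickups) / sizeof(pickups[0]);
    const double radius = 100.0; /* "near" threshold in meters, an assumed value */

    double dist[sizeof(pickups) / sizeof(pickups[0])];
    size_t far = 0;
    for (size_t i = 0; i < n; i++) {
        dist[i] = haversine(reg_lat, reg_lon, pickups[i][0], pickups[i][1]);
        if (dist[i] > radius)
            far++;
    }
    qsort(dist, n, sizeof(double), cmp_double);

    printf("fraction of orders not \"at\" the restaurant: %.2f\n", (double)far / n);
    printf("median pick-up distance: %.0f m\n", dist[n / 2]);
    return 0;
}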

There are also quite a few other improvements (for instance, drivers must be within a certain distance of the restaurant before they can tap "pick up", and that distance needs adjusting when the restaurant may be inside an indoor mall), and the overall result shows up as a sharp drop in the order cancellation rate.

Overall, it looks like the system generates a list for humans to follow up on (calling the restaurant to ask, maybe?), but the list it produces should be quite accurate (drivers won't make life hard for themselves by driving somewhere odd and tapping "pick up"), so running simple algorithms over this data can quickly fix a lot of problems...

The B-tree Structure Inside PostgreSQL

Saw "Indexes in PostgreSQL — 4 (Btree)", which explains PostgreSQL's B-tree structure and how common queries make use of the B-tree.

It covers three kinds of lookups: search by equality, search by inequality, and search by range. Later it gets into sorting and how indexes are used there.

Although the analysis targets PostgreSQL, the concepts are general; other databases built on B-tree structures work much the same way...
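Conceptually, all three lookups reduce to one primitive inside a sorted B-tree page: find the first slot not less than the search key, then scan rightward. A minimal sketch over a plain sorted array (the idea only; this is not PostgreSQL's actual nbtree code):

#include <stdio.h>

/* First index whose value is >= key (lower bound) in a sorted array;
   the same primitive drives equality, inequality and range scans. */
static size_t lower_bound(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

int main(void)
{
    int a[] = {10, 20, 20, 30, 40};
    size_t n = sizeof(a) / sizeof(a[0]);
    size_t i;

    /* equality: land on the lower bound, then check the key matches */
    i = lower_bound(a, n, 20);
    printf("= 20: first match at slot %zu\n", i);

    /* inequality (> 30): start just past the last 30, scan to the end */
    for (i = lower_bound(a, n, 31); i < n; i++)
        printf("> 30: %d\n", a[i]);

    /* range (20..30): scan between the two bounds */
    for (i = lower_bound(a, n, 20); i < n && a[i] <= 30; i++)
        printf("20..30: %d\n", a[i]);

    return 0;
}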

How to Compute the Mean of a List of Numbers

A 2016 article, but it's a classic problem, so it has resurfaced recently. How do you compute the mean of a list of numbers: "Calculating the mean of a list of numbers".

You have a list of floating point numbers. No nasty tricks - these aren’t NaN or Infinity, just normal “simple” floating point numbers.

Now: Calculate the mean (average). Can you do it?

You have a list of floating point numbers (no NaN or Infinity); how do you find the mean? Things you have to consider include:

  • The first thing to handle when designing the algorithm is all the ways it can overflow.
  • Keeping the error small.
  • A reasonable amount of computation.

It seems like a good question for a data team to discuss during interviews: "the mean" is a metric with genuine business meaning, and the volume of data pouring in from time-series events can produce all kinds of overflow scenarios and precision problems, so this is a situation you actually run into in the real world.

Thinking it over: integers are indeed much simpler (you can compute the exact value), but the float case is genuinely much harder:

It also demonstrates a problem: Floating point mathematics is very hard, and this makes it somewhat unsuitable for testing with Hypothesis.

The landmine that immediately comes to mind: in the IEEE 754 float world, 2^24 + 1 is still 2^24:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int i;
    float a;

    for (i = 0; i < 32; i++) {
        /* pow() returns a double; assigning to float rounds to 24-bit precision */
        a = pow(2, i);
        printf("2^%d     = %f\n", i, a);

        a += 1;
        printf("2^%d + 1 = %f\n", i, a);
    }

    return 0;
}

The interesting part of the output:

2^23     = 8388608.000000
2^23 + 1 = 8388609.000000
2^24     = 16777216.000000
2^24 + 1 = 16777216.000000
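As for actually computing the mean, one common building block (a minimal sketch, not the article's definitive answer, and it has rounding trade-offs of its own) is the incremental update mean += (x - mean) / n, which never materializes the full sum:

#include <stdio.h>

/* Incremental mean: after each element, mean = mean + (x - mean) / n.
   No running sum is kept, so adding up large values can't overflow to inf. */
static double running_mean(const double *xs, size_t n)
{
    double mean = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += (xs[i] - mean) / (double)(i + 1);
    return mean;
}

int main(void)
{
    /* A naive sum of these overflows to +inf (1e308 + 1e308 > DBL_MAX),
       while the incremental form returns 1e+308 exactly. */
    double xs[] = {1e308, 1e308, 1e308, 1e308};
    size_t n = sizeof(xs) / sizeof(xs[0]);

    printf("mean = %g\n", running_mean(xs, n));
    return 0;
}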

YAML's Pain Points

Saw "In defense of YAML" on Changelog, which talks about YAML's problems; it references the article "In Defense of YAML".

I wouldn't accept everything the article says wholesale, but two of the points it makes really are pain points: the first is whitespace (that is, indentation), and the second is the special syntax. Both are constant headaches when working with YAML:

Whitespace is a minefield. Its syntax is surprisingly complex.
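The special-syntax complaint is easiest to see with the classic "Norway problem"; a minimal example, assuming YAML 1.1 semantics (which is what many parsers still implement):

# YAML 1.1 semantics, as implemented by many parsers:
countries:
  - GB        # the string "GB"
  - NO        # parsed as the boolean false, not the string "NO"
version: 3.10  # parsed as the float 3.1, not the string "3.10"
time: 22:22    # a base-60 integer (1342) under YAML 1.1 rules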

It's like JavaScript's == (I mean my earlier post "JavaScript 的 == 條列式比較"): you can memorize the definitions, but they feel so arbitrary that you're left with a kind of resigned helplessness...

The article also mentions that JSON's lack of comments really is one of the more frustrating parts of using it...

Facebook Insider Reveals Passwords Were Stored in Plain Text

Seen on Krebs on Security: "Facebook Stored Hundreds of Millions of User Passwords in Plain Text for Years"; Facebook's official response is at "Keeping Passwords Secure".

A few key points. First, the scope: so far, data going back to 2012 has been found to be included:

The Facebook source said the investigation so far indicates between 200 million and 600 million Facebook users may have had their account passwords stored in plain text and searchable by more than 20,000 Facebook employees. The source said Facebook is still trying to determine how many passwords were exposed and for how long, but so far the inquiry has uncovered archives with plain text user passwords dating back to 2012.

The other key point is that this data had already been queried internally on a large scale (uh oh):

My Facebook insider said access logs showed some 2,000 engineers or developers made approximately nine million internal queries for data elements that contained plain text user passwords.

Also, Legal and PR have both swung into action; the public statements will dress up the numbers to reduce the damage:

“The longer we go into this analysis the more comfortable the legal people [at Facebook] are going with the lower bounds” of affected users, the source said. “Right now they’re working on an effort to reduce that number even more by only counting things we have currently in our data warehouse.”

Renfro said the company planned to alert affected Facebook users, but that no password resets would be required.

Another news item from last year is worth cross-reading: "Facebook’s security chief is leaving, and no one’s going to replace him":

Instead of building out a dedicated security team, Facebook has dissolved it and is instead embedding security engineers within its other divisions. “We are not naming a new CSO, since earlier this year we embedded our security engineers, analysts, investigators, and other specialists in our product and engineering teams to better address the emerging security threats we face,” a Facebook spokesman said in an email. Facebook will “continue to evaluate what kind of structure works best” to protect users’ security, he said.

Looks like it's time to change passwords again... (fortunately I'm already used to a password manager, so every site has a different password?)

Oh right, one more concept worth adding: when they say "we have no evidence that anyone accessed...", the more accurate way to put it is usually "we weren't auditing that part... so there is no evidence".

VisiData: Viewing Data in the Terminal

A project I saw in "VisiData"; the project page is at "A Swiss Army Chainsaw for Data", and the screenshots show it's a terminal-based viewer:

I noticed it because it supports .xls:

explore new datasets effortlessly, no matter the format: vd foo.json bar.csv baz.xls

The full list is under SUPPORTED SOURCES; it even supports pcap, and I wonder what that looks like :o

PostgreSQL's Fix for fsync()

I previously wrote "PostgreSQL 對 fsync() 的行為傷腦筋..." about fsync() behaving differently from what developers expect in some cases, but then forgot to follow the progress...

I just saw Percona's "PostgreSQL fsync Failure Fixed – Minor Versions Released Feb 14, 2019" and realized the corresponding updates already shipped on 2/14; it's in the release notes as well:

By default, panic instead of retrying after fsync() failure, to avoid possible data corruption (Craig Ringer, Thomas Munro)

Some popular operating systems discard kernel data buffers when unable to write them out, reporting this as fsync() failure. If we reissue the fsync() request it will succeed, but in fact the data has been lost, so continuing risks database corruption. By raising a panic condition instead, we can replay from WAL, which may contain the only remaining copy of the data in such a situation. While this is surely ugly and inefficient, there are few alternatives, and fortunately the case happens very rarely.

A new server parameter data_sync_retry has been added to control this; if you are certain that your kernel does not discard dirty data buffers in such scenarios, you can set data_sync_retry to on to restore the old behavior.

The current workaround: when fsync() fails, to avoid data corruption, PostgreSQL now panics outright and replays the records from WAL, which also means an HA mechanism (if one is set up) can end up being triggered by this...

They also added data_sync_retry, which lets PostgreSQL administrators forcibly turn the panic behavior off and have PostgreSQL retry the fsync() instead; presumably that will be useful once kernels change their behavior...
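For reference, turning the old behavior back on is a one-line change in postgresql.conf; only do this if you're certain your kernel keeps dirty buffers around after a failed fsync():

# postgresql.conf
# Default is off (panic on fsync() failure and replay from WAL).
# Only safe if the kernel does not discard dirty buffers on failure.
data_sync_retry = on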