Apple also runs its own crawler, Applebot

Found on Hacker News Daily: "About Applebot". The discussion on Hacker News is also quite interesting and worth a look: "Applebot (support.apple.com)".

Its current use is as the bot behind products like Siri:

Applebot is the web crawler for Apple. Products like Siri and Spotlight Suggestions use Applebot.

The page explains how to identify it: requests come from the 17.0.0.0/8 range, and reverse DNS resolves to *.applebot.apple.com:

Traffic coming from Applebot is identified by its user agent, and reverse DNS shows it in the *.applebot.apple.com domain, originating from the 17.0.0.0 net block.

It can also be recognized from the User-Agent:

Mozilla/5.0 (Device; OS_version) AppleWebKit/WebKit_version (KHTML, like Gecko) Version/Safari_version Safari/WebKit_version (Applebot/Applebot_version)
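Purely as an illustration of those identification rules (the 17.0.0.0/8 block, reverse DNS under *.applebot.apple.com, and a forward-confirming lookup), a minimal check in Python could look like the sketch below; the sample address at the end is made up, not a documented Applebot IP:

import ipaddress
import socket

APPLE_NET = ipaddress.ip_network("17.0.0.0/8")

def is_applebot(ip):
    # The address has to come from Apple's 17.0.0.0/8 block.
    if ipaddress.ip_address(ip) not in APPLE_NET:
        return False
    try:
        # Reverse DNS should land under *.applebot.apple.com.
        host, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    if not host.endswith(".applebot.apple.com"):
        return False
    try:
        # Forward-confirm: the claimed hostname must resolve back to the same IP.
        _, _, addrs = socket.gethostbyname_ex(host)
    except OSError:
        return False
    return ip in addrs

print(is_applebot("17.58.0.1"))  # made-up address, for illustration only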

Further down the page there is a section on search ("About search rankings"), which has people wondering whether Apple is starting to build its own search engine; the Hacker News thread has plenty of speculation along those lines.

There are also some fun tidbits, like someone spilling the problems they ran into while building the original version XD

jd20 on Hacker News:

Some fun facts:
- Applebot was originally written in Go (and uncovered a user agent bug on redirects, revealing its Go origins to the world, which Russ Cox fixed the next day).

- Up until the release of iOS 9, Applebot ran entirely on four Mac Pros in an office. Those four Mac Pros could crawl close to 1B web pages a day.

- In its first week of existence, it nearly took Apple's internal DNS servers offline. It was then modified to do its own DNS resolution and caching, fond memories...

Source: I worked on the original version.

The binary search implementation people have been discussing lately...

I think I saw this directly on Hacker News: someone posted a binary search implementation claiming to be noticeably faster than the standard one: "Binary Search: A new implementation that is up to 25% faster (github.com)".

The code is in "scandum/binary_search" on GitHub; reading the source, it is obviously written to exploit the branch prediction and parallel execution capabilities of modern CPUs XDDD

Also, judging from the Hacker News discussion, this style of code lets the CPU's speculative, parallel execution make better use of memory bandwidth, which is probably the main reason it benchmarks faster.
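As a rough sketch of the idea (the repository itself is C; this Python version only shows the general shape of a branch-friendly, "monobound"-style loop, so don't take it as the author's exact code): the number of iterations depends only on the array size, and each iteration does a single easy-to-predict comparison, which is what lets the CPU overlap and prefetch memory accesses.

def monobound_search(array, key):
    # Iteration count depends only on len(array); the single comparison per
    # iteration is friendly to branch prediction and speculation.
    bot, top = 0, len(array)
    while top > 1:
        mid = top // 2
        if key >= array[bot + mid]:
            bot += mid
        top -= mid
    if top and array[bot] == key:
        return bot
    return -1

print(monobound_search([1, 3, 5, 7, 9], 7))  # prints 3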

That said, this is only a starting point that throws out some directions for the community to explore; the downsides still need to be checked, such as whether older CPUs pay a heavy penalty (overhead), and how it behaves on other kinds of CPUs...

Confusion caused by Google's redesigned search ads

Google recently redesigned its search ads; The Verge's "Google's ads just look like search results now" has the report and screenshots:

The ad label has been moved into the favicon position, which makes it much easier for users to mistake ads for organic results. That has also pushed ad click-through rates up significantly, as mentioned in "Google's latest search results change further blurs what's an ad":

For all four clients (a local health care company, two business-to-business companies and an e-commerce company), the desktop click-through rates increased and ranged from 4% to 10.5%. All clients had slight declines in the click-through rates on mobile devices.

The Verge later followed up with a piece reflecting on this change: "How much longer will we trust Google's search results?".

My suggestion is to use uBlock Origin as a baseline tool (it should be supported on every major browser); going a step further, you can try DuckDuckGo, though no guarantee the search quality will satisfy you...

Amazon Kendra, an enterprise document search service

AWS has launched Amazon Kendra, a search service with semantic analysis that lets you query in natural language directly: "Announcing Amazon Kendra: Reinventing Enterprise Search with Machine Learning".

Google used to offer the Google Search Appliance, which also indexed internal enterprise data (discontinued in 2016), but I don't think it ever got to the point of natural-language search?

Amazon Kendra isn't cheap: the Enterprise Edition provides 150 GB of storage and 500k documents, plus roughly 40k queries/day, at USD$7/hr, which works out to about USD$5,040 a month (7 × 24 × 30); still, for an enterprise it should be quite useful...

It also notes that the query allowance is an estimate and varies with how complex the queries are:

Actual queries per day will vary based on query complexity, which greatly varies from customer to customer. Less complex queries (e.g. “leave policy”) consume less resources to run, and more complex queries (e.g. “What’s the daily parking allowance in Seattle?”) consume more resources to run. The total number of queries you can run with your allocated resources will depend on your mix of queries. The max queries per day provided above is an estimate, assuming 80% less complex queries and 20% more complex queries.

Pretty interesting; it feels like it can already handle simple analysis?
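For reference, this is roughly what a natural-language query looks like from code via boto3, assuming an index has already been created and filled with documents; the index ID below is made up, and the question is the example from the pricing note above:

import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",  # made-up index ID
    QueryText="What's the daily parking allowance in Seattle?",
)

for item in response["ResultItems"]:
    # Each result carries a type (e.g. ANSWER or DOCUMENT) and a document title.
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))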

Amazon Elasticsearch Service can now use S3 as a second storage tier

A new Amazon Elasticsearch Service feature uses Amazon S3 as a second storage tier (UltraWarm): "Announcing UltraWarm (Preview) for Amazon Elasticsearch Service".

UltraWarm requires separate nodes (running a different build?); their specs (vCPU-to-memory ratio) are close to the Memory Optimized instances but noticeably more expensive, so you need a large enough data set before it breaks even...

Looking at us-east-1, SSD EBS space costs USD$0.135/GB and traditional magnetic disks USD$0.067/GB (not sure whether I/O is billed separately?), while UltraWarm storage is USD$0.024/GB. Worth mentioning: Amazon S3 itself is USD$0.023/GB, so the UltraWarm price looks like it simply bundles in the API call costs?
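A back-of-the-envelope comparison using only the per-GB prices above (ignoring the UltraWarm node premium and any I/O charges, which are exactly the parts that decide the real break-even; the 10 TB figure is arbitrary):

prices_per_gb_month = {
    "EBS SSD": 0.135,
    "EBS magnetic": 0.067,
    "UltraWarm storage": 0.024,
    "S3": 0.023,
}

data_gb = 10 * 1024  # arbitrary 10 TB data set

for name, price in prices_per_gb_month.items():
    # Monthly storage-only cost for the chosen data size.
    print(f"{name:>17}: USD${data_gb * price:,.2f}/month")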

Startpage has been acquired by an advertising company

Saw this on Hacker News, pointing to a Reddit post (it looks like it's been around for a while): "Startpage is now owned by an advertising company".

Startpage used to be my default search engine, but after running into a lot of bugs I mostly stopped using it. For now my default is still DuckDuckGo, and when needed I use press-g-to-google-duckduckgo, which I wrote earlier, to switch over to Google...

DuckDuckGo still has search quality issues...

The court ruling in hiQ's favor over scraping LinkedIn data

hiQ was sued by LinkedIn for scraping public LinkedIn data (see the 2017 post "hiQ prevails / LinkedIn must allow scraping / Of your page info"); the case went all the way to the Ninth Circuit, and the final ruling confirmed LinkedIn's complete loss. The opinion is available at "HIQ LABS V. LINKEDIN".

The opinion recounts that the district court had ordered LinkedIn not to restrict access to public profile data in any way:

The district court granted hiQ’s motion. It ordered LinkedIn to withdraw its cease-and-desist letter, to remove any existing technical barriers to hiQ’s access to public profiles, and to refrain from putting in place any legal or technical measures with the effect of blocking hiQ’s access to public profiles. LinkedIn timely appealed.

Elsewhere in the opinion the circuit court repeatedly affirms that the district court's ruling was reasonable and rejects LinkedIn's arguments (only two passages are pulled here; the point is made many more times):

In short, the district court did not abuse its discretion in concluding on the preliminary injunction record that hiQ currently has no viable way to remain in business other than using LinkedIn public profile data for its Keeper and Skill Mapper services, and that HiQ therefore has demonstrated a likelihood of irreparable harm absent a preliminary injunction.

We conclude that the district court’s determination that the balance of hardships tips sharply in hiQ’s favor is not “illogical, implausible, or without support in the record.” Kelly, 878 F.3d at 713.

A ruling at the circuit court level is pretty much final, barring any unusual further proceedings...

Facebook's new algorithm for handling misspellings

Facebook had already published fastText; at the beginning of this month they announced another algorithm, Misspelling Oblivious Embeddings (MOE), an improvement built on top of fastText: "A new model for word embeddings that are resilient to misspellings".

Facebook's write-up says MOE outperforms fastText on user-generated text:

We checked the effectiveness of this approach considering different intrinsic and extrinsic tasks, and found that MOE outperforms fastText for user-generated text.

The paper is on arXiv: "Misspelling Oblivious Word Embeddings".

According to the write-up, fastText focuses on the semantic loss, while MOE adds a spell correction loss:

The loss function of fastText aims to more closely embed words that occur in the same context. We call this semantic loss. In addition to the semantic loss, MOE also considers an additional supervised loss that we call spell correction loss. The spell correction loss aims to embed misspellings close to their correct versions by minimizing the weighted sum of semantic loss and spell correction loss.
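Spelled out, the combined objective described there is just a weighted sum of the two losses (the notation below is mine, with α as the mixing weight; it is not taken from the paper):

L_{\mathrm{MOE}} = \alpha \, L_{\mathrm{semantic}} + (1 - \alpha) \, L_{\mathrm{spell}}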

However, the facebookresearch/moe repository on GitHub currently only contains the dataset; the code hasn't been open sourced for direct use, so you'd probably have to implement it yourself...

Jumpshot, which collects data through Avast antivirus

I came across "Less than Half of Google Searches Now Result in a Click", which describes how Google's search result pages are heavily skewed toward Google's own services, an issue that has been getting a lot of attention over the past few weeks...

But another point worth noting is the mention of Jumpshot, a service that can analyze the pages users visit and their behavior...

Avast acquired Jumpshot in 2013: "AVAST Software Acquires Jumpshot to Work Magic Against Slow PC Performance"; at the time the focus was performance:

Having served as PC tech consultants to their friends and family, their goal was to build a product to help less tech-savvy PC users optimize and tune up their PC performance, cleaning it from unpleasant toolbars and junk software.

But as early as 2015 you could see Avast explaining on their own forum that Avast collects data and feeds it into Jumpshot: "Avast and Jumpshot".

These aggregated results are the only thing that Avast makes available to Jumpshot customers and end users.

And it is on top of this data that the service is built.