Ongoing support for Python 2

Although official support for Python 2 ended on 2020/01/01 (digging around, I found "Python 2 series to be retired by April 2020", so the official date seems to have slipped to April...), I just ran across this news piece, which brings up commercial support: "Snakes on a wane: Python 2 development is finally frozen in time, version 3 slithers on".

From the last paragraph:

Those catering to corporate clients intend to continue support Python 2.7 for a while. In October, Red Hat said it will stop supporting Python 2.7 in RHEL 8 come June 2024.

I'm not sure whether these patches will also get ported over to CentOS; if they do, that's at least one place that lets you struggle on for four more years, and it feels like you could use a container to spin up an isolated environment...

As for Ubuntu's LTS releases, I don't know what the plan is. 20.04 is known to be aiming for removal, but if problems turn up in the Python 2 shipped with 16.04 and 18.04, it's unclear whether patches will be accepted...

Then there's pyenv: from a quick look at the current state of things, it seems Python 2 is just being left in place, and it's not clear what plan, if any, will come out of it...

Xor filter: one step beyond Bloom filter and Cuckoo filter

Bloom filter is one of the classic textbook algorithms, but in practice there are better choices, such as the previously covered Cuckoo filter: "Cuckoo Filter:比 Bloom Filter 多了 Delete".

Now someone has proposed a new data structure claimed to beat both Bloom filter and Cuckoo filter: "Xor Filters: Faster and Smaller Than Bloom Filters".

It doesn't win across the board, though; the difference you can spot immediately is that delete isn't supported:

Deletions are generally unsafe with these filters even in principle because they track hash values and cannot deal with collisions without access to the object data: if you have two objects mapping to the same hash value, and you have a filter on hash values, it is going to be difficult to delete one without the other.
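
To make the quoted point concrete, here's a tiny sketch of my own (not from the paper; the hash is deliberately terrible so the collision is easy to hit): a filter that only keeps hash values can't tell two colliding keys apart, so deleting one wipes out the other:

#include <stdbool.h>
#include <stdio.h>

/* Deliberately weak hash so a collision is trivial to construct. */
static unsigned char weak_hash(const char *s) {
  unsigned char h = 0;
  while (*s) h += (unsigned char)*s++;  /* byte sum mod 256 */
  return h;
}

static bool filter[256];  /* the "filter": one bit per hash value */

int main(void) {
  filter[weak_hash("ab")] = true;   /* insert "ab" */
  filter[weak_hash("ba")] = true;   /* insert "ba": same byte sum, same hash */

  filter[weak_hash("ab")] = false;  /* try to delete only "ab"... */

  /* ...and "ba" is gone too, a false negative these filters must never
     produce. Prints "no". */
  printf("contains \"ba\"? %s\n", filter[weak_hash("ba")] ? "yes" : "no");
  return 0;
}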

The preprint of the paper can be downloaded from arXiv: "Xor Filters: Faster and Smaller Than Bloom and Cuckoo Filters".

The withering of pipenv, and poetry as a replacement

Saw "If this project is dead, just tell us", a thread discussing how pipenv's future looks more and more doubtful and asking the maintainers for an official word; right below it, someone immediately suggests jumping ship to poetry.

It so happens that a while back I saw another article, "My Python Development Environment, 2020 Edition", whose author also swapped pipenv out for poetry, and spelled out pipenv's problems:

pipenv → poetry. This move's more complex. I stopped using Pipenv for a couple of reasons:

  • Governance: the lead of Pipenv was someone with a history of not treating his collaborators well. That gave me some serious concerns about the future of the project, and of my ability to get bugs fixed.
  • Bugs and rapid API changes. About a year ago, Pipenv had lots of bugs, and a rapid pace of change introducing or changing APIs. I ran into minor issues at least once a week. Nothing was seriously bad, but it generally felt fairly unstable. I kept having to update various automated deploy workflows to work around issues or changes to Pipenv.

Looks like it's time to pull pipenv out of the tech stack...

A tiny HTTP server library

The httpserver.h project is written in C and is just a single .h file; as the example shows, usage isn't too complicated:

#define HTTPSERVER_IMPL
#include "httpserver.h"

#define RESPONSE "Hello, World!"

/* Called for every request; replies with a fixed plain-text body. */
void handle_request(struct http_request_s* request) {
  struct http_response_s* response = http_response_init();
  http_response_status(response, 200);
  http_response_header(response, "Content-Type", "text/plain");
  http_response_body(response, RESPONSE, sizeof(RESPONSE) - 1);
  http_respond(request, response);
}

int main() {
  struct http_server_s* server = http_server_init(8080, handle_request);
  http_server_listen(server);  /* blocks and runs the event loop */
}

It supports both epoll and kqueue. It's fun for small projects, but for anything more complicated I'd still reach for other frameworks, since there are just too many things that can block...
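
For reference, trying it out should need nothing beyond a C compiler (my reading of the example above, so treat the exact commands as an assumption): save the snippet as main.c next to httpserver.h, build with cc main.c -o server, run ./server, and curl http://localhost:8080/ should come back with Hello, World!.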

cdnjs maintenance moves to Cloudflare

I'm not sure whether it's cdnjs or CDNJS, since the official site uses lowercase but GitHub uses uppercase...

Anyway, cdnjs used to be kept up to date by the community (in practice through a bot, though the bot itself also needs maintenance); for lack of people and time, it has been handed over to Cloudflare: "An Update on CDNJS".

The post also refreshes cdnjs's daily request numbers, which now come to roughly six billion requests a day.

Originally, Cloudflare's role was to provide the service by sponsoring bandwidth:

Within Cloudflare’s infrastructure there is a set of machines which are responsible for pulling the latest version of the repo periodically. Those machines then become the origin for cdnjs.cloudflare.com, with Cloudflare’s Global Load Balancer automatically handling failures. Cloudflare’s cache automatically stores copies of many of the projects making it possible for us to deliver them quickly from all 195 of our data centers.

But the update bot itself broke, and its maintainer had no time to fix it:

Unfortunately approximately thirty days ago one of those bots stopped working, preventing updated projects from appearing in CDNJS. The bot's open-source maintainer was not able to invest the time necessary to keep the bot running. After several weeks we were asked by the community and the CDNJS founders to take over maintenance of the CDNJS repo itself.

So now Cloudflare has taken over maintenance:

This means the Cloudflare engineering team is taking responsibility for keeping the contents of github.com/cdnjs/cdnjs up to date, in addition to ensuring it is correctly served on cdnjs.cloudflare.com.

The post also points out a problem: browsers, for security reasons, are moving to a separate cache per site, which greatly weakens one of cdnjs's original design goals; what remains now is mostly just saving bandwidth:

The future value of CDNJS is now in doubt, as web browsers are beginning to use a separate cache for every website you visit. It is currently used on such a wide swath of the web, however, it is unlikely it will be disappearing any time soon.

Speeding up embedded YouTube videos

YouTube's embed pulls in a huge number of components, so there's a project that rips out everything meaningless to the user: "Lite YouTube Embed".

The comparison shows Lite YouTube Embed downloading far fewer components.

There are differences in functionality, of course, but the basics should all work fine...

It's still implemented in JavaScript, but the actual code is only about 40 lines? (The comments run to roughly twice the length of the code.) See "lite-youtube-embed/src/lite-yt-embed.js".

One thing to note: the code uses ES6 class syntax, so if IE11 is a concern, you'll probably want to transpile it when bundling...

An AWS automated code review service powered by machine learning

AWS has launched Amazon CodeGuru, a code review service that uses machine learning to offer suggestions: "AWS announces Amazon CodeGuru for automated code reviews and application performance recommendations".

From the interface you can see it supports both GitHub and AWS's own CodeCommit, and it looks capable of giving quite a few suggestions; the site doesn't mention security though, so at first I figured the product wasn't positioned there.

But the FAQ does bring up common security issues:

Q: What type of issues are detected by Amazon CodeGuru Reviewer?

Amazon CodeGuru Reviewer checks for concurrency issues, potential race conditions, un-sanitized inputs, inappropriate handling of sensitive data such as credentials, resource leaks, and also detects race conditions in concurrent code.

The FAQ also says that only Java is supported for now:

Amazon CodeGuru Reviewer currently supports Java code stored in GitHub and AWS CodeCommit repositories.

Pricing is based on lines of code, though I couldn't work out what the "per month" part means:

Code scan (pull requests): $0.75 per 100 lines of code scanned per month
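
Taken at face value (my arithmetic, not an AWS example), a pull request touching 2,000 lines would cost 20 × $0.75 = $15 to scan; whether re-scans of the same code within the month are billed again is exactly what the wording leaves unclear.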

The other launch, Amazon CodeGuru Profiler, is an APM-style product; that market already has plenty of players, and it looks like AWS is wading in to stomp around there too...

A blazing-fast Base64 encoding/decoding implementation

Saw "Base64 encoding and decoding at almost the speed of a memory copy", which encodes and decodes Base64 data extremely quickly.

The implementation is accelerated with Intel's AVX-512; once the data is big enough (larger than the L1 cache), it approaches the speed of a plain copy (the memcpy() mentioned in the quote below):

We show how we can encode and decode base64 data at nearly the speed of a memory copy (memcpy) on recent Intel processors, as long as the data does not fit in the first-level (L1) cache. We use the SIMD (Single Instruction Multiple Data) instruction set AVX-512 available on commodity processors. Our implementation generates several times fewer instructions than previous SIMD-accelerated base64 codecs.
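
The fast kernels are in the paper; as a reference point for what's being sped up, here's a plain scalar encoder (a minimal sketch of my own, not the paper's code). Every 3 input bytes become 4 output characters, and the SIMD versions chew through many such groups per instruction:

#include <stdio.h>

static const char tbl[] =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Map each full 3-byte group to 4 characters; pad the tail with '='. */
static void b64_encode(const unsigned char *in, size_t len, char *out) {
  size_t i = 0, o = 0;
  for (; i + 2 < len; i += 3) {
    unsigned v = (unsigned)in[i] << 16 | (unsigned)in[i + 1] << 8 | in[i + 2];
    out[o++] = tbl[v >> 18 & 63];
    out[o++] = tbl[v >> 12 & 63];
    out[o++] = tbl[v >> 6 & 63];
    out[o++] = tbl[v & 63];
  }
  if (i < len) {  /* 1 or 2 leftover bytes */
    unsigned v = (unsigned)in[i] << 16 | (i + 1 < len ? (unsigned)in[i + 1] << 8 : 0);
    out[o++] = tbl[v >> 18 & 63];
    out[o++] = tbl[v >> 12 & 63];
    out[o++] = (i + 1 < len) ? tbl[v >> 6 & 63] : '=';
    out[o++] = '=';
  }
  out[o] = '\0';
}

int main(void) {
  char out[16];
  b64_encode((const unsigned char *)"Hi!", 3, out);
  printf("%s\n", out);  /* prints SGkh */
  return 0;
}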

For now, though, AMD gets left out in the cold...

Solving a logic puzzle with a program...

A logic puzzle seen via Hacker News Daily: "“Which answer in this list is the correct answer to this question?”".

The question goes like this:

Which answer in this list is the correct answer to this question?

  • All of the below.
  • None of the below.
  • All of the above.
  • One of the above.
  • None of the above.
  • None of the above.

The accepted answer works it out by deduction, but the top-voted one just writes a program and brute forces it XDDD (you have to admit, everyone loves this move...); a sketch of that brute force is below.
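
In the same spirit as that top-voted answer, here's my own sketch (not the code from the thread), reading statement 4 as "exactly one of the above": enumerate all 2^6 truth assignments and keep those where every statement's value matches what it claims:

#include <stdbool.h>
#include <stdio.h>

int main(void) {
  for (int m = 0; m < 64; m++) {
    bool s[7];  /* s[1]..s[6]: assumed truth value of each statement */
    for (int i = 1; i <= 6; i++) s[i] = (m >> (i - 1)) & 1;

    bool c[7];  /* what each statement actually claims */
    c[1] = s[2] && s[3] && s[4] && s[5] && s[6];       /* all of the below */
    c[2] = !s[3] && !s[4] && !s[5] && !s[6];           /* none of the below */
    c[3] = s[1] && s[2];                               /* all of the above */
    c[4] = (s[1] + s[2] + s[3]) == 1;                  /* one of the above */
    c[5] = !s[1] && !s[2] && !s[3] && !s[4];           /* none of the above */
    c[6] = !s[1] && !s[2] && !s[3] && !s[4] && !s[5];  /* none of the above */

    bool ok = true;  /* consistent iff every statement matches its claim */
    for (int i = 1; i <= 6; i++) ok = ok && (s[i] == c[i]);
    if (ok) {
      printf("consistent assignment:");
      for (int i = 1; i <= 6; i++) printf(" %d", (int)s[i]);
      printf("\n");
    }
  }
  return 0;
}

The only assignment that survives is 0 0 0 0 1 0, i.e. statement 5 ("None of the above.") is true, matching the accepted deduction.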

Results from Google and Cloudflare's post-quantum algorithm experiments

Quantum computers have kept making breakthroughs in recent years; while practical attacks on today's cryptography still look a way off, it pays to start studying quantum-resistant algorithms early...

The Google Chrome team and the Cloudflare team both control big enough products, so the results of their joint experiment carry a lot of weight in both academia and industry: "Real-world measurements of structured-lattices and supersingular isogenies in TLS" and "The TLS Post-Quantum Experiment".

On Chrome's side, the experiment ran on the Canary and Dev channels, with a control group and two new algorithms:

Google Chrome installs, on Dev and Canary channels, and on all platforms except iOS, were randomly assigned to one of three groups: control (30%), CECPQ2 (30%), or CECPQ2b (30%). (A random ten percent of installs did not take part in the experiment so the numbers only add up to 90.)

Each algorithm has its pros and cons: one has smaller keys but is slow to compute (SIKE, CECPQ2b); the other has larger keys but is much faster to compute (HRSS, CECPQ2):

For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.

We enabled both CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE/p434 + X25519) key-agreement algorithms on all TLS-terminating edge servers.

It feels like the experiments will continue, since both algorithms' drawbacks are still a bit fatal...