A Fast Way to Generate SQLite Data: One Billion Rows in Under a Minute

In 「Towards Inserting One Billion Rows in SQLite Under A Minute」 the author tries to write 1B rows into SQLite on a 2019 MBP within one minute; some of the techniques are worth playing with. The spec of the MBP 2019 is:

The machine I am using is MacBook Pro, 2019 (2.4 GHz Quad Core i5, 8GB, 256GB SSD, Big Sur 11.1)

The first version is written in Python; inserting 10M rows took about 15 minutes:

In this script, I tried to insert 10M rows, one by one, in a for loop. This version took close to 15 minutes, sparked my curiosity and made me explore further to reduce the time.

With five PRAGMAs added, the (still naive) for-loop version gets to 100M rows in about ten minutes:

The naive for loop version took about 10 minutes to insert 100M rows.
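
For reference, a minimal Python sketch of what the PRAGMA-tuned loop looks like (the table schema and values here are simplified stand-ins, not the article's exact code):

import sqlite3

con = sqlite3.connect('test.db')
# typical bulk-load PRAGMAs; the article combines five settings along these lines
con.execute('PRAGMA journal_mode = OFF')
con.execute('PRAGMA synchronous = 0')
con.execute('PRAGMA cache_size = 1000000')
con.execute('PRAGMA locking_mode = EXCLUSIVE')
con.execute('PRAGMA temp_store = MEMORY')

con.execute('CREATE TABLE IF NOT EXISTS user (id INTEGER PRIMARY KEY, area TEXT, age INTEGER, active INTEGER)')
for _ in range(100_000_000):
    con.execute('INSERT INTO user (area, age, active) VALUES (?, ?, ?)', ('100-0001', 20, 1))
con.commit()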

Using batched inserts brings it down to about eight and a half minutes:

The batched version took about 8.5 minutes to insert 100M rows.
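
The batched variant is essentially the same setup but with executemany() over chunks of rows; a rough sketch (the batch size is my own assumption, not taken from the article):

import sqlite3

con = sqlite3.connect('test.db')  # same PRAGMA setup as in the sketch above
BATCH_SIZE = 100_000
batch = [('100-0001', 20, 1)] * BATCH_SIZE  # placeholder rows
for _ in range(100_000_000 // BATCH_SIZE):
    con.executemany('INSERT INTO user (area, age, active) VALUES (?, ?, ?)', batch)
con.commit()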

Next comes the classic weapon PyPy, which brings it down to two and a half minutes:

All I had to do was run my existing code, without any change, using PyPy. It worked and the speed bump was phenomenal. The batched version took only 2.5 minutes to insert 100M rows. I got close to 3.5x speed :)

After that the author switches to Rust; there is quite a bit of tuning discussion along the way, but skipping straight to the end... the final version does 100M rows in about 33 seconds:

I created a threaded version, where I had one writer thread that received data from a channel and four other threads which pushed data to the channel. This is the current best version which took about 32.37 seconds.

Where PyPy can be used, it is still worth considering...

Querying Parquet Data with SQL via DuckDB in Python

I ran into DuckDB via 「Querying Parquet using DuckDB」, which references the article 「Querying Parquet with Precision using DuckDB」; it lets you run SQL directly against data in Parquet format.

I think I had seen DuckDB before without paying much attention; seeing it again and playing with it turned out to be quite fun. DuckDB supports quite a few programming languages and data formats, and the article's combination of Python and Parquet is an interesting one...

First grab the sample Parquet data, then install duckdb via pip:

cd /tmp; wget https://github.com/cwida/duckdb-data/releases/download/v1.0/taxi_2019_04.parquet; pip install -U duckdb

Then go into the Python 3 interactive shell:

>>> import duckdb
>>> print(duckdb.query("SELECT COUNT(*) FROM 'taxi_2019_04.parquet' WHERE pickup_at BETWEEN '2019-04-15' AND '2019-04-20'").fetchall())
[(1276565,)]

In the examples the file name part can also use *; from the documentation, it is glob-style matching underneath:

DuckDB supports the globbing syntax, which allows it to query all three files simultaneously.
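
As a quick illustration of the globbing (the extra file names here are hypothetical), the same kind of query simply takes a pattern instead of a single file:

>>> import duckdb
>>> print(duckdb.query("SELECT COUNT(*) FROM 'taxi_2019_*.parquet'").fetchall())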

The article mentions it is much faster than Pandas, though I don't think the comparison is entirely fair: when people reach for Pandas it is often for other kinds of work. Still, it at least shows that speed is something DuckDB cares about.

What immediately came to mind instead is that next time I have to deal with CSV-like files, I will probably give DuckDB a try...
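
A minimal sketch of what that could look like, assuming a local file named data.csv:

>>> import duckdb
>>> print(duckdb.query("SELECT COUNT(*) FROM read_csv_auto('data.csv')").fetchall())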

The cURL and Travis CI Incident

A few days ago Daniel Stenberg (the creator and current maintainer of cURL) published a post about moving off Travis CI, switching to Zuul CI and Circle CI: 「Bye bye Travis CI」; the corresponding Hacker News discussion can be found at 「Bye Bye Travis CI (haxx.se)」.

The post makes two main points. The first is that Travis CI had originally promised to keep providing a free service to open source projects:

The second is that this year Travis CI changed its commercial pricing and also tightened the restrictions on open source projects, including that a project must not receive sponsorship from any commercial company or organization:

Project must not be sponsored by a commercial company or organization (monetary or with employees paid to work on the project)

Daniel Stenberg also makes clear in the post that he does not intend to pay for now and would rather find cheap, workable options on the market (and there still seem to be some; at least Zuul CI and Circle CI are both on the table), so he moved off Travis CI:

Lots of people have commented and think I’m “whining” about Travis CI charging for something that is useful and that I should rather just pay up. I could probably have gone with that but I dislike their broken promise and that they don’t consider us Open source anymore and I feel I have a responsibility to use the funds we get from gracious donors as wisely and economically as possible, and that includes using no-cost or cheap services rather than services charging thousands of dollars per year.

If there really were no other available and viable options, then paying could’ve been an alternative. Now, moving on to something else was the right choice for us.

Two days later Travis CI published 「Open Source Terms at Travis CI – An Update and Clarification」; it never names cURL, but reading the two together it is fairly clear what happened.

Looking at my own small projects (updated infrequently, with few test cases), putting them on Circle CI is good enough; I also run my own GitLab, so I can run CI there when needed.

For a large project like cURL, with a huge number of test cases, spreading the runs across different CI vendors actually looks like a pretty reasonable setup...

Django 3.2 LTS

Django 3.2 LTS is out: 「Django 3.2 released」; the corresponding release notes are at 「Django 3.2 release notes」.

This release is a bit special: regular releases get roughly 15 months of support, while LTS releases get 36 months. That said, my feeling so far is that you should still plan upgrades as a routine matter, otherwise jumping from 3.2 straight to 4.2 is bound to be an extremely painful process.

After upgrading a Django 3.1 project to 3.2 yesterday, running the test cases immediately hit the issue described in 「Customizing type of auto-created primary keys」; for now I am working around it with DEFAULT_AUTO_FIELD = 'django.db.models.AutoField', and nothing else broke...
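
In settings.py this is a single line, which keeps the pre-3.2 behaviour instead of switching auto-created primary keys to BigAutoField:

# settings.py
DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'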

A few things I find interesting, such as the JSONL format:

The new JSONL serializer allows using the JSON Lines format with dumpdata and loaddata. This can be useful for populating large databases because data is loaded line by line into memory, rather than being loaded all at once.
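
A small sketch of using it programmatically (the app label myapp is just a placeholder):

from django.core.management import call_command

# dump one app as JSON Lines; the file can later be loaded back with
# `manage.py loaddata`, processed line by line
call_command('dumpdata', 'myapp', format='jsonl', output='myapp.jsonl')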

PostgreSQL 9.5 and MySQL 5.6 are no longer supported:

Upstream support for PostgreSQL 9.5 ends in February 2021. Django 3.2 supports PostgreSQL 9.6 and higher.

The end of upstream support for MySQL 5.6 is April 2021. Django 3.2 supports MySQL 5.7 and higher.

I wrote Django 1.x a long time ago and then let it lapse for years; recently I started again from 3.0. Now that I am a bit more familiar, some of what I wrote feels off, so I need to find time to test a few things and fix how I use Django.

Mapbox GL JS's License Change, and the Birth of MapLibre GL

Saw 「MapLibre GL is a free and open-source fork of mapbox-gl-JS (github.com/maplibre)」; digging around a bit, it turns out that early this year Mapbox GL JS stopped being under an open source license starting with v2.0.0 (it was previously under a BSD license), and the community immediately forked the last open source version and kept developing it, which became MapLibre GL.

MapTiler covered this earlier in the year: 「MapLibre: Mapbox GL open-source fork」.

The community reacted swiftly: forks of the latest open-source version were made almost immediately by multiple parties. In another positive development, the community came together the next day and agreed to make this a joint effort, rather than splitting energies. A video call was organized and the MapLibre coalition was formed. It includes people working for MapTiler, Elastic, StadiaMaps, Microsoft, Ceres Imaging, WhereGroup, Jawg, Stamen Design, etc.

MapLibre GL is currently compatible with the original v1.13.0 and can be swapped in directly (a v1.13.1 did come out later in February, but that was after the license change in v2.0.0):

  "dependencies": {
-    "mapbox-gl": "^1.13.0"
+    "maplibre-gl": ">=1.14.0"
  }

Noting this down: if you want to use it on a website in the future, be aware that Mapbox GL JS cannot be used without registering an account, and the SDK forcibly collects data:

Mapbox gl-js version 2.0 or higher (“Mapbox Web SDK”) must be used according to the Mapbox Terms of Service. This license allows developers with a current active Mapbox account to use and modify the Mapbox Web SDK. Developers may modify the Mapbox Web SDK code so long as the modifications do not change or interfere with marked portions of the code related to billing, accounting, and anonymized data collection. The Mapbox Web SDK only sends anonymized usage data, which Mapbox uses for fixing bugs and errors, accounting, and generating aggregated anonymized statistics. This license terminates automatically if a user no longer has an active Mapbox account.

That said, if you are pulling OpenStreetMap data, Leaflet is probably still the first choice for now...

The sscanf() and strlen() Story Continues

Yesterday in 「GTA 的啟動讀取效能問題」 I mentioned the sscanf() and strlen() problem; just now on Hacker News Daily I saw another piece, 「It Can Happen to You」, where the author's own project got bitten as well.

He mentions a fix: replacing sscanf() with strtof() for reading numbers, which dramatically reduced parsing time:

Replacing the sscanf call with strtof improved startup by nearly a factor of 10: from 1.8 seconds to 199 milliseconds.

The end of the article mentions quite a few discussions and patches currently in progress.

First, a patch on FreeBSD is already being tested: 「address a performance problem w/ partial sscanf on long strings...」; you can see they are being very careful about whether it causes any performance regression.

Then there is glibc, where a ticket was already opened back in 2014: 「Bug 17577 - sscanf extremely slow on large strings」, but it has only accumulated a few comments and there is no progress so far.

And the 「std::scanf, std::fscanf, std::sscanf」 page on cppreference.com now carries a note about the complexity issue:

Complexity

Not guaranteed. Notably, some implementations of sscanf are O(N), where N = std::strlen(buffer) [1]. For performant string parsing, see std::from_chars.

I expect more people will come forward with their own horror stories, or discover that some slow project of theirs is slow for exactly this reason...

A Demo of a Web Page That Is Itself Valid JSON

Saw the interesting 「Web Data Render」 on Hacker News Daily: the page itself is JSON, and with a few tricks it loads JavaScript that reads the data and renders it...

The page itself is no longer valid HTML, though: 「Showing results for https://webdatarender.com/」, so it only counts as an amusing trick...

Japan Post's CSV Data

Saw 「Parsing the Infamous Japanese Postal CSV」 complaining about just how terrible the CSV data provided by Japan Post is, which is rather 「entertaining」; plenty of people on Hacker News share the pain XDDD: 「Parsing the Infamous Japanese Postal CSV (dampfkraft.com)」.

The author maintains the 「posuto」 package, which makes it easy for developers to look up location information from a postal code in Python:

import posuto as 〒

🗼 = 〒.get('〒105-0011')

print(🗼)
# "東京都港区芝公園"
print(🗼.prefecture)
# "東京都"
print(🗼.kana)
# "トウキョウトミナトクシバコウエン"
print(🗼.romaji)
# "Tokyo To, Minato Ku, Shibakoen"
print(🗼.note)
# None

The author builds his data from 「読み仮名データの促音・拗音を小書きで表記するもの(zip形式)」, and the article describes just how many weird exceptions these CSV files contain, making them a nightmare to parse by machine XDDD (not that this is very surprising?)

The most entertaining part, though, is probably this bit he mentions:

Oh, and if you need a Win3.1 or DOS program to copy the data onto an IBM H floppy disk, just check the bottom of JP Post's page - they've got you covered.

It appears to be this section at the bottom of JP Post's page:

Now that really is just too much XDDD

Both Google and AWS Announce Moves Toward OpenTelemetry

A few days ago both Google and AWS published news about converging on OpenTelemetry: 「OpenTelemetry's First Release Candidates」 and 「Public Preview – AWS Distro for OpenTelemetry」.

On the AWS side, AWS X-Ray looks related to OpenTelemetry; a quick search does turn up an earlier effort, 「AWS X-Ray SDK w/ OpenTelemetry API」, but going forward it appears 「AWS Distro for OpenTelemetry」 is the project taking over.

On the Google side it looks like the Cloud Trace product line: OpenCensus and OpenTracing, which Google had been pushing, will be supported for another two years according to the site, but the focus shifts to OpenTelemetry:

OpenCensus and OpenTracing have merged to form OpenTelemetry, which serves as the next major version of OpenCensus and OpenTracing. OpenTelemetry will offer backwards compatibility with existing OpenCensus integrations, and we will continue to make security patches to existing OpenCensus libraries for two years.

Digging around for Azure news, there were earlier articles about moving toward OpenTelemetry support, but I did not see a press release in this round...

DuckDB

Saw an interesting introduction to DuckDB: 「DuckDB」.

[I]t uses the PostgreSQL parser but models itself after SQLite in that databases are a single file and the code is designed for use as an embedded library, distributed in a single amalgamation C++ file (SQLite uses a C amalgamation).

It looks like a database designed around OLAP workloads, and in Python it can be installed directly via pip.
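
A minimal sketch of the embedded, single-file usage (the file name is arbitrary):

import duckdb

con = duckdb.connect('analytics.duckdb')  # the whole database lives in this one file
con.execute('CREATE TABLE IF NOT EXISTS t (x INTEGER)')
con.execute('INSERT INTO t VALUES (1), (2), (3)')
print(con.execute('SELECT SUM(x) FROM t').fetchall())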

It looks like something built to squeeze throughput out of a single machine, while presenting an interface everyone is familiar with.

The Hacker News thread 「DuckDB – An embeddable SQL database like SQLite, but supports Postgres features (duckdb.org)」 also offers plenty of pointers...