Steam Stops Accepting Bitcoin for Game Purchases

Steam has announced that it will stop accepting Bitcoin for game purchases: "Steam is no longer supporting Bitcoin".

The official reason given is that transaction fees are too high (even though the customer is the one paying them):

In the past few months we've seen an increase in the volatility in the value of Bitcoin and a significant increase in the fees to process transactions on the Bitcoin network. For example, transaction fees that are charged to the customer by the Bitcoin network have skyrocketed this year, topping out at close to $20 a transaction last week (compared to roughly $0.20 when we initially enabled Bitcoin).

The other reason is volatility:

Historically, the value of Bitcoin has been volatile, but the degree of volatility has become extreme in the last few months, losing as much as 25% in value over a period of days.

So my guess is that Steam wasn't converting to fiat immediately? I recall that BitPay, the payment processor they work with, can convert to fiat right away...

Performance Problems in InnoDB's MVCC Under Heavy Load

Via Facebook I saw that the folks at Percona have fixed a performance problem in which InnoDB's MVCC shows O(n^2) behavior under heavy load.

The entry in MySQL's official bug tracking system is "InnoDB's MVCC has O(N^2) behaviors". The reproduction case given there stuffs a large number of INSERTs into a single transaction, after which another transaction reading through a secondary index is affected.

It also mentions "Secondary index updates make consistent reads do O(N^2) undo page lookups". That one has been fixed, but the problem appears to be rooted in how the implementation was originally designed, which is why O(n^2) behavior shows up in so many places...

This bug feels like a batch-job pattern: a batch job may wrap everything in a transaction, write tens of thousands of rows in one go, and only then COMMIT. That pattern has a good chance of triggering this bug and ending up affecting the online service...
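As a rough sketch of that workload pattern, inferred from the bug report's description (the table and index names here are made up for illustration): one session piles up uncommitted INSERTs while another session does a consistent read through a secondary index.

```sql
-- Session 1: a batch job wraps a huge number of INSERTs in one transaction
START TRANSACTION;
INSERT INTO orders (id, status) VALUES (1, 'new');
-- ... tens of thousands more rows, with no COMMIT yet ...

-- Session 2: a consistent read through a secondary index now has to chase
-- the undo records of all those uncommitted rows, which is where the
-- O(N^2) cost shows up
SELECT * FROM orders FORCE INDEX (idx_status) WHERE status = 'new';
```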

XA Transactions in MySQL

Percona's "How to Deal with XA Transactions Recovery" covers the recovery problems of MySQL's XA transactions.

XA transactions are mainly used for distributed transactions, and the official documentation points to the corresponding standard:

The MySQL XA implementation is based on the X/Open CAE document Distributed Transaction Processing: The XA Specification. This document is published by The Open Group and available at http://www.opengroup.org/public/pubs/catalog/c193.htm.

What I find more interesting, though, is this part, where Percona flatly recommends against using XA on 5.6:

But there is a limitation in 5.6: you can only XA commit/rollback transactions that belong to your session. That means after a crash you are out of luck. To get rid of these you need to promote a slave or perform a logical dump and restore. The best plan is to avoid the use of XA transactions with 5.6.
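For context, this is roughly what the XA lifecycle looks like in MySQL and where the 5.6 limitation bites; the xid 'trx1' and the accounts table are placeholders for illustration.

```sql
-- One branch of a distributed transaction
XA START 'trx1';
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
XA END 'trx1';
XA PREPARE 'trx1';  -- persisted in PREPARED state, waiting for the coordinator
XA COMMIT 'trx1';   -- second phase, driven by the coordinator

-- After a crash, list transactions left hanging in PREPARED state:
XA RECOVER;
-- On 5.6 a new session cannot XA COMMIT / XA ROLLBACK those orphans,
-- which is exactly the limitation Percona is warning about.
```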

Digging through the bug report, though, Oracle's own 5.7 still hasn't fixed it; it looks like Percona patched it themselves XDDD

Badger, Written in Go

Dgraph is promoting Badger, a store they developed themselves: "Introducing Badger: A fast key-value store written natively in Go".

The target is RocksDB, and they claim to be several times faster:

Based on benchmarks, Badger is at least 3.5x faster than RocksDB when doing random reads. For value sizes between 128B to 16KB, data loading is 0.86x - 14x faster compared to RocksDB, with Badger gaining significant ground as value size increases. On the flip side, Badger is currently slower for range key-value iteration, but that has a lot of room for optimization.

However, I think some important features are simply not offered by Badger, which makes this feel like comparing apples to oranges... For example, RocksDB provides transactions, while Badger states outright that they don't intend to support them:

Keep it simple, stupid. No support for transactions, versioning or snapshots -- anything that can be done outside of the store should be done outside.

Hackish Fixes for MySQL GTID Replication

Percona's "Database Daily Ops Series: GTID Replication" walks through possible fixes for when MySQL's GTID replication breaks. They are fairly hackish methods: after the fix you still need to run pt-table-checksum to confirm that the data on both sides is consistent, and pull out pt-table-sync to resynchronize if anything is off.

The first trick is pt-slave-restart, which skips the SQL that causes the problem and forces replication onward (hmm):

This passes the master’s UUID and it skips all global transactions breaking replication on a specific slave server[.]

The second trick is mysqlslavetrx, a similar approach, except the hack is done with MySQL's official tooling...

The third trick is to inject a fake transaction, which is really just doing it by hand XDDD
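A minimal sketch of what injecting such an empty transaction looks like; the UUID and sequence number below are placeholders for the GTID that is breaking replication:

```sql
STOP SLAVE;
-- Claim the problematic GTID by committing an empty transaction under it
SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:12345';
BEGIN;
COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```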

Whichever trick you use, remember to finish up with pt-table-{checksum,sync} afterwards, or things will still fall apart...

MyRocks vs. InnoDB Across Different Disk Performance Levels

"MyRocks: use less IO on writes to have more IO for reads" looks at how the performance of MyRocks and InnoDB differs on a disk array, a slow SSD, and a fast SSD.

You can see that the more IOPS capacity the I/O layer provides, the smaller the gap between MyRocks and InnoDB... In other words, RocksDB uses IOPS more efficiently. "Why is MyRocks more write-efficient than InnoDB?" offers some explanation of this as well (though the InnoDB setup in that post doesn't use COMPRESSED at all).

It's marketing, to be sure... but worth keeping an eye on :o

How InnoDB's Isolation Level and the Performance Schema Affect Performance

Although Mark Callaghan's main focus these days is MyRocks, he still pays close attention to InnoDB performance (it is, after all, a mature and competitive product). His post "Sysbench, InnoDB, transaction isolation and the performance schema" looks at InnoDB in MySQL 5.6.26 to understand how the isolation level and the performance schema affect performance; the results can be found there.

That disabling the performance schema improves performance is expected, though the gain looks much smaller than anticipated. What is rather surprising is that in some cases RR (REPEATABLE-READ) performs better than RC (READ-COMMITTED); the post explains why:

Using repeatable-read boosts performance because it reduces the mutex contention from getting a consistent read snapshot as that is done once per transaction rather than once per statement.
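A quick illustration of that difference (sbtest1 being the usual sysbench table): under RR the read view is built once per transaction and reused, while under RC every statement builds a fresh one.

```sql
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT COUNT(*) FROM sbtest1;  -- read view (consistent snapshot) created here
SELECT COUNT(*) FROM sbtest1;  -- RR reuses that same read view
COMMIT;

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT COUNT(*) FROM sbtest1;  -- RC builds a new read view for this statement...
SELECT COUNT(*) FROM sbtest1;  -- ...and again for this one, hence the extra
COMMIT;                        -- mutex contention
```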

Looking over the numbers, RC is a bit faster in some of the pure-read cases, but overall the difference between RR and RC on MySQL 5.6.26 really isn't that pronounced anymore...

How Facebook Backs Up MySQL Data and Validates Its Correctness

Facebook has written at some more length about how it backs up MySQL data and validates its correctness: "Continuous MySQL backup validation: Restoring backups".

First is the Continuous Restore Tier (CRT). You can see that they lean heavily on HDFS as the first-layer home for backups, covering full logical backups (via mysqldump), differential (diff) backups, and binary log (binlog) backups (streamed into HDFS).

They have also adopted GTIDs, which makes the subsequent processing more convenient:

All of our database servers also use global transaction IDs (GTIDs), which gives us another layer of control when replaying transactions from binlog backups.

The CRT part is really just existing tools stacked together, and different organizations will do it differently depending on their scale. The real highlight is the ORC Restore Coordinator (ORC), where Facebook has built a large amount of software to automate the restore process:

Once a restore request comes in, a peon pulls the data out of HDFS and rolls it forward with binlog replay:

Peons contain all relevant logic for retrieving backups from HDFS, loading them into their local MySQL instance, and rolling them forward to a certain point in time by replaying binlogs. Each restore job a peon works on goes through these five stages[.]

It's precisely because Facebook's MySQL usage is large enough to require automating all of this that these systems exist at all...

PostgreSQL's Vacuum Performance Improvements

"No More Full-Table Vacuums" describes a major improvement to PostgreSQL's vacuum performance; in particular, on large databases vacuum used to have to scan each table from start to finish to guarantee transaction ID correctness:

Current releases of PostgreSQL need to read every page in the database at least once every 2 billion write transactions (less, with default settings) to verify that there are no old transaction IDs on that page which require "freezing".

On machines with a lot of data, this operation eats a huge amount of resources and causes all sorts of unpleasant symptoms:

All of a sudden, when the number of transaction IDs that have been consumed crosses some threshold, autovacuum begins processing one or more tables, reading every page. This consumes much more I/O bandwidth, and exerts much more cache pressure on the system, than a standard vacuum, which reads only recently-modified page.
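To make that threshold concrete, here is one way to watch it on a running system; these are standard catalog queries, not taken from the post itself.

```sql
-- Transaction-ID age of each ordinary table; autovacuum forces the
-- whole-table (now "aggressive") scan once this age crosses the threshold
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC;

-- The threshold itself (default 200 million):
SHOW autovacuum_freeze_max_age;
```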

The author has submitted a patch that changes this to read only the parts that aren't yet known to be settled:

Instead of whole-table vacuums, we now have aggressive vacuums, which will read every page in the table that isn't already known to be entirely frozen.

Note that an aggressive vacuum still consumes a lot more resources than a regular vacuum, but the cost can be spread out (somewhat like the comparison between one big GC causing lag and many minor GCs keeping a program's response times smooth):

An aggressive vacuum still figures to read more data than a regular vacuum, possibly a lot more. But at least it won't read the data that hasn't been touched since the last aggressive vacuum, and that's a big improvement.

This feature is slated to appear in PostgreSQL 9.6; we'll see whether it becomes the default...

PostgreSQL 9.5 Released, with UPSERT!

PostgreSQL 9.5 has been officially released, adding the long-awaited UPSERT feature: "PostgreSQL 9.5: UPSERT, Row Level Security, and Big Data".

SQL:2003 formally defined this operation under the name MERGE, but online most people still seem to be more used to calling it UPSERT:

A relational database management system uses SQL MERGE (also called upsert) statements to INSERT new records or UPDATE existing records depending on whether condition matches.

In other words, it's syntax that INSERTs when the row doesn't exist and UPDATEs when it does. A common use case is implementing counters (even though this is hard on database performance).
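For instance, the counter case just mentioned looks something like this in 9.5; the table and column names are illustrative.

```sql
CREATE TABLE page_hits (
    url  text   PRIMARY KEY,
    hits bigint NOT NULL
);

-- The first hit inserts the row; every later hit falls into the UPDATE branch
INSERT INTO page_hits (url, hits) VALUES ('/index.html', 1)
ON CONFLICT (url) DO UPDATE SET hits = page_hits.hits + 1;
```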

Without UPSERT you could only build this out of transactions or stored procedures, and that performs worse than an implementation inside the database engine; hence UPSERT finally got implemented.