Posts tagged "transaction"

EnterpriseDB's planned zheap, aiming to solve the VACUUM problem...

The day before yesterday I was asked about the post "DO or UNDO - there is no VACUUM". After getting home I read it carefully and dug through some related material, and it looks like they are moving toward InnoDB's approach...

Both PostgreSQL and InnoDB implement interaction between transactions with MVCC, but the actual implementations are quite different, and one obvious consequence of the difference is that PostgreSQL needs VACUUM. The same author covered the two designs and their trade-offs in a post back in 2011: "MySQL vs. PostgreSQL, Part 2: VACUUM vs. Purge".

On UPDATE, InnoDB writes the new version into the table itself and moves the old version, which may still be needed for a rollback, outside the table:

In InnoDB, only the most recent version of an updated row is retained in the table itself. Old versions of updated rows are moved to the rollback segment, while deleted row versions are left in place and marked for future cleanup. Thus, purge must get rid of any deleted rows from the table itself, and clear out any old versions of updated rows from the rollback segment.

Rows removed by DELETE are then handled by the purge thread:

All the information necessary to find the deleted records that might need to be purged is also written to the rollback segment, so it's quite easy to find the rows that need to be cleaned out; and the old versions of the updated records are all in the rollback segment itself, so those are easy to find, too.

This is why InnoDB exposes purge-thread-related settings that take care of this work: "MySQL :: MySQL 5.7 Reference Manual :: 14.6.11 Configuring InnoDB Purge Scheduling".
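A hedged sketch of what that tuning surface looks like; the variable names come from that manual chapter, and the values here are only illustrative:

```sql
-- List the purge-related knobs (MySQL 5.7).
SHOW VARIABLES LIKE 'innodb_purge%';

-- innodb_purge_threads can only be set at startup, e.g. in my.cnf:
--   [mysqld]
--   innodb_purge_threads = 4

-- innodb_purge_batch_size is dynamic (300 is the 5.7 default).
SET GLOBAL innodb_purge_batch_size = 300;
```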

PostgreSQL's approach is the reverse: the old version is left where it is, and the new version is written alongside it:

PostgreSQL takes a completely different approach. There is no rollback tablespace, or anything similar. When a row is updated, the old version is left in place; the new version is simply written into the table along with it.

Where the old and new versions live isn't really the problem; the issue is that there is no comparable centralized place recording what needs to be cleaned up:

Lacking a centralized record of what must be purged, PostgreSQL's VACUUM has historically needed to scan the entire table to look for records that might require cleanup.

This is why PostgreSQL needs a process like autovacuum to scan tables, or a manually run VACUUM. The author's 2017 post "MVCC and VACUUM" notes that the situation is still much the same today.
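A minimal sketch of the effect, using only stock PostgreSQL; the table name is made up:

```sql
-- Every UPDATE leaves the old row version behind as a "dead tuple".
CREATE TABLE demo (id int PRIMARY KEY, v text);
INSERT INTO demo SELECT g, 'x' FROM generate_series(1, 10000) g;
UPDATE demo SET v = 'y';

-- Statistics should now show roughly 10000 dead tuples waiting for cleanup.
SELECT n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'demo';

-- VACUUM scans the table and marks that space reusable.
VACUUM demo;
```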

In this year's (2018) post, EnterpriseDB proposes the zheap idea: on UPDATE, write the new version into the table and move the data that may be needed for rollback into an undo log. It's essentially InnoDB's approach brought over wholesale, just never acknowledged anywhere in the post XD:

That brings me to the design which EnterpriseDB is proposing. We are working to build a new table storage format for PostgreSQL, which we’re calling zheap. In a zheap, whenever possible, we handle an UPDATE by moving the old row version to an undo log, and putting the new row version in the place previously occupied by the old one. If the transaction aborts, we retrieve the old row version from undo and put it back in the original location; if a concurrent transaction needs to see the old row version, it can find it in undo. Of course, this doesn’t work when the block is full and the row is getting wider, and there are some other problem cases as well, but it covers many useful cases. In the typical case, therefore, even bulk updates do not force a zheap to grow. Instead, the undo grows. When a transaction commits, all row versions that will become dead are in the undo, not the zheap.

An obvious question comes to mind, though: if the goal is to fix the problem, wouldn't it be enough to find somewhere to record which locations need reclaiming? Is changing the whole approach along the way meant to avoid fragmentation?

Let's wait and see how it turns out...

Stripe Also Announces It Is Ending Bitcoin Support

Stripe has published an announcement that it will stop supporting Bitcoin: "Ending Bitcoin Support". There will be a three-month transition period, with support ending on April 23, 2018:

Over the next three months we will work with affected Stripe users to ensure a smooth transition before we stop processing Bitcoin transactions on April 23, 2018.

The reasons are about the same as everyone else's for dropping it: ever-slower transaction confirmation and ever-higher fees:

Transaction confirmation times have risen substantially; this, in turn, has led to an increase in the failure rate of transactions denominated in fiat currencies. (By the time the transaction is confirmed, fluctuations in Bitcoin price mean that it’s for the “wrong” amount.) Furthermore, fees have risen a great deal. For a regular Bitcoin transaction, a fee of tens of U.S. dollars is common, making Bitcoin transactions about as expensive as bank wires.

Steam's reasons at the time were very similar... (see "Steam Stops Accepting Bitcoin for Game Purchases")

Steam Stops Accepting Bitcoin for Game Purchases

Steam has announced that it no longer accepts Bitcoin for game purchases: "Steam is no longer supporting Bitcoin".

The official reason given is that transaction fees are too high (even though it's the user who pays them):

In the past few months we've seen an increase in the volatility in the value of Bitcoin and a significant increase in the fees to process transactions on the Bitcoin network. For example, transaction fees that are charged to the customer by the Bitcoin network have skyrocketed this year, topping out at close to $20 a transaction last week (compared to roughly $0.20 when we initially enabled Bitcoin).

The other reason is volatility:

Historically, the value of Bitcoin has been volatile, but the degree of volatility has become extreme in the last few months, losing as much as 25% in value over a period of days.

Which makes me wonder: doesn't Steam convert straight to fiat? I remember the exchange they work with (BitPay) can convert to fiat immediately...

InnoDB MVCC Performance Problems Under Load

Saw on Facebook that folks at Percona fixed an O(n^2) performance problem in InnoDB's MVCC under heavy load:

The entry in MySQL's official bug tracking system is "InnoDB's MVCC has O(N^2) behaviors". The reproduction case given there stuffs a large number of INSERTs into one transaction, after which another transaction using a secondary index is affected.
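My rough sketch of the shape of that repro, with made-up table and index names (the bug report has the authoritative test case):

```sql
-- Session 1: build up a huge number of uncommitted row versions.
BEGIN;
INSERT INTO t (id, k, pad) VALUES (1, 1, 'x');
-- ... repeated for a very large number of rows, still not committed ...

-- Session 2: a consistent read via the secondary index `k` has to chase
-- undo records for the uncommitted entries it passes over, which is
-- where the O(N^2) lookups come from.
SELECT COUNT(*) FROM t FORCE INDEX (k) WHERE k > 0;
```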

It also mentions "Secondary index updates make consistent reads do O(N^2) undo page lookups". That one has been fixed, but it looks like the issue stems from how the implementation was originally designed? Which would explain why O(n^2) behavior shows up in so many places...

This bug smells like batch-job behavior? A batch job may wrap everything in a transaction, writing tens of thousands of rows before issuing a COMMIT. That pattern has a good chance of triggering this bug and impacting production services...

XA Transactions in MySQL

Percona's "How to Deal with XA Transactions Recovery" covers recovery problems with MySQL XA transactions.

XA transactions are mainly used for distributed transactions, and the official documentation cites the corresponding standard:

The MySQL XA implementation is based on the X/Open CAE document Distributed Transaction Processing: The XA Specification. This document is published by The Open Group and available at http://www.opengroup.org/public/pubs/catalog/c193.htm.
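For reference, the basic statement flow looks like this; the identifier 'trx1' and the table are made up for illustration:

```sql
XA START 'trx1';
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
XA END 'trx1';
XA PREPARE 'trx1';   -- phase one: the branch is made durable

-- After a crash, this lists branches stuck in the PREPARED state:
XA RECOVER;

XA COMMIT 'trx1';    -- phase two (or XA ROLLBACK 'trx1')
```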

What I find more interesting is this: Percona flat-out recommends not using XA on 5.6:

But there is a limitation in 5.6: you can only XA commit/rollback transactions that belong to your session. That means after a crash you are out of luck. To get rid of these you need to promote a slave or perform a logical dump and restore. The best plan is to avoid the use of XA transactions with 5.6.

Digging through the bug report, though, Oracle's own 5.7 still hasn't fixed it; it looks like Percona patched it themselves XDDD

Badger, Written in Go

Dgraph is promoting Badger, their home-grown key-value store: "Introducing Badger: A fast key-value store written natively in Go".

The target is RocksDB, and it claims to be several times faster:

Based on benchmarks, Badger is at least 3.5x faster than RocksDB when doing random reads. For value sizes between 128B to 16KB, data loading is 0.86x - 14x faster compared to RocksDB, with Badger gaining significant ground as value size increases. On the flip side, Badger is currently slower for range key-value iteration, but that has a lot of room for optimization.

But some important features aren't offered in Badger, which makes the comparison feel like apples and oranges... For example, RocksDB provides transactions, while Badger states outright that they don't intend to support them:

Keep it simple, stupid. No support for transactions, versioning or snapshots -- anything that can be done outside of the store should be done outside.

Hacky Fixes for MySQL GTID Replication

Percona's "Database Daily Ops Series: GTID Replication" walks through possible fixes when MySQL's GTID replication breaks. The methods are fairly hacky; after the fix you still need to run pt-table-checksum to verify the two sides are consistent, and if anything is off, pull out pt-table-sync to resynchronize.

The first trick is pt-slave-restart, which skips the problematic SQL and forces replication along (ugh):

This passes the master’s UUID and it skips all global transactions breaking replication on a specific slave server[.]

The second trick is mysqlslavetrx, a similar approach, except it abuses MySQL's official tool instead...

The third trick is "Inject a Fake Transaction", which is really just doing it yourself by hand XDDD
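A minimal sketch of that manual version; the UUID and sequence number are placeholders for whichever GTID is breaking replication:

```sql
STOP SLAVE;

-- Claim the offending GTID with an empty ("fake") transaction,
-- so the slave records it as executed and moves past it.
SET GTID_NEXT = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:12345';
BEGIN;
COMMIT;
SET GTID_NEXT = 'AUTOMATIC';

START SLAVE;
```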

Whichever trick you use, remember to finish up with pt-table-{checksum,sync} afterwards, or things will still fall apart...

MyRocks vs. InnoDB Performance on Different Storage Devices

"MyRocks: use less IO on writes to have more IO for reads" looks at the performance differences between MyRocks and InnoDB on a disk array, a slow SSD, and a fast SSD.

You can see that the higher the IOPS capacity the I/O layer can provide, the smaller the gap between MyRocks and InnoDB... In other words, RocksDB makes more efficient use of IOPS. "Why is MyRocks more write-efficient than InnoDB?" offers some explanation of this as well (though that piece doesn't use COMPRESSED for InnoDB at all).

It's marketing, sure... but worth keeping an eye on :o

The Performance Impact of InnoDB's Isolation Level and the Performance Schema

Although Mark Callaghan's main focus is now MyRocks, he still pays close attention to InnoDB performance (it is, after all, a mature and competitive product). His post "Sysbench, InnoDB, transaction isolation and the performance schema" examines InnoDB in MySQL 5.6.26 to understand how the isolation level and the performance schema affect performance. The results can be browsed here.

That turning off the performance schema improves performance is expected, though the gain looks much smaller than anticipated. What's rather surprising is that in some cases RR (REPEATABLE-READ) outperforms RC (READ-COMMITTED); the reason is given as well:

Using repeatable-read boosts performance because it reduces the mutex contention from getting a consistent read snapshot as that is done once per transaction rather than once per statement.
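To make that concrete: under RR the first read establishes the snapshot for the whole transaction, while under RC every statement takes a fresh one. A minimal sketch of the difference, in standard MySQL syntax with a hypothetical table:

```sql
-- Applies to the next transaction in this session only.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
SELECT COUNT(*) FROM orders;  -- snapshot taken here...
SELECT COUNT(*) FROM orders;  -- ...and reused: no new snapshot
COMMIT;

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
SELECT COUNT(*) FROM orders;  -- fresh snapshot
SELECT COUNT(*) FROM orders;  -- another fresh snapshot
COMMIT;
```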

Looking over the numbers, though, RC is a bit faster in some of the pure-read cases, but overall the difference between RR and RC on MySQL 5.6.26 really isn't that pronounced anymore...

How Facebook Backs Up MySQL Data and Verifies Its Correctness

Facebook has devoted some more space to how they back up MySQL data and verify its correctness: "Continuous MySQL backup validation: Restoring backups".

First is the Continuous Restore Tier (CRT), where they clearly rely heavily on HDFS as the first-tier home for backups, covering full logical backups (via mysqldump), differential (diff) backups, and binary log (binlog) backups (streamed into HDFS).

They have also adopted GTIDs, which makes the later processing easier:

All of our database servers also use global transaction IDs (GTIDs), which gives us another layer of control when replaying transactions from binlog backups.
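One place that extra control shows up is when restoring a logical dump before replaying binlogs. This is not Facebook's tooling, just a generic sketch with a placeholder server UUID and range:

```sql
-- On a freshly restored instance, before replaying binlogs:
RESET MASTER;  -- gtid_executed must be empty before setting gtid_purged

-- Mark the backup's GTID set as already applied; replay will skip them.
SET GLOBAL gtid_purged = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-123456';
```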

The CRT piece is really just off-the-shelf tools stacked together, and different organizations will do this differently depending on scale. The real highlight is the ORC Restore Coordinator (ORC), where Facebook has clearly built a lot of software to automate the restore process:

Once a restore request comes in, a peon pulls the data out of HDFS and rolls it forward with binlog replay:

Peons contain all relevant logic for retrieving backups from HDFS, loading them into their local MySQL instance, and rolling them forward to a certain point in time by replaying binlogs. Each restore job a peon works on goes through these five stages[.]

It's only because Facebook's MySQL usage is large enough to require automating all of this that these systems exist at all...
