ArVid: How Russians Backed Up Digital Data onto VHS Tapes in the 1990s

Saw "ArVid: how Russians squeezed 4 hard drives into one VHS tape in the 90s" on Hacker News: in the 1990s, Russians came up with a way to back up digital data onto VHS (video) tapes, with a kit called ArVid.

The method reuses the VHS machine (VCR) people already had at home, plus an ISA card plugged into a 386 PC (the equivalent of a PCI-e card in today's machines). The ISA card connects to the VCR's Video In (for writing backups) and Video Out (for reading data back), and the card also has an infrared LED emitter module on a cable that you attach (tape) over the VCR's IR receiver, so the ISA card can drive the VCR through its remote-control protocol.

The underlying medium is really the same idea as a tape drive; ArVid just adds an extra step of encoding the data as video so it can reuse an off-the-shelf VCR.

On top of that, ArVid adds Hamming codes, so that errors can be detected and corrected when the data is read back later.
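The article doesn't spell out which code parameters ArVid actually used, so as a minimal sketch of the idea, here is the classic Hamming(7,4) code in Python: 4 data bits become 7 stored bits, and any single flipped bit can be corrected on read-back:

```python
# Minimal Hamming(7,4) sketch: 4 data bits -> 7 code bits, correcting any
# single-bit error. ArVid's exact code parameters aren't given in the
# article; this only illustrates the principle.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 = clean, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1              # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]       # recover d1..d4

codeword = hamming74_encode([1, 0, 1, 1])
codeword[3] ^= 1                          # simulate a read error off the tape
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```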

A three-hour VHS tape could hold 2GB of data. To get a feel for how much that was, pull up the "History of hard disk drives" page and look at what shipped in the early 1990s:

1990 – IBM 0681 "Redwing" – 857 megabytes, twelve 5.25-inch disks. First HDD with PRML Technology (Digital Read Channel with 'partial-response maximum-likelihood' algorithm).

1991 – Areal Technology MD-2060 – 60 megabytes, one 2.5-inch disk platter. First commercial hard drive with platters made from glass.

1991 – IBM 0663 "Corsair" – 1,004 megabytes, eight 3.5-inch disks; first HDD using magnetoresistive heads

1991 – Intégral Peripherals 1820 "Mustang" – 21.4 megabytes, one 1.8-inch disk, first 1.8-inch HDD

1992 – HP Kittyhawk – 20 MB, first 1.3-inch hard-disk drive

A really fun product...

Amazon's Elasticsearch Service Now Offers 14 Days of Free Hourly Snapshots

Amazon Elasticsearch Service now provides 14 days of free hourly snapshots: "Amazon Elasticsearch Service increases data protection with automated hourly snapshots at no extra charge".

Amazon Elasticsearch Service has increased its snapshot frequency from daily to hourly, providing more granular recovery points. If you need to restore your cluster, you now have numerous, recent snapshots to choose from. These automated snapshots are retained for 14 days at no extra charge.

Note that this requires version 5.3 or later; older versions still only get daily snapshots:

  • For domains running Elasticsearch 5.3 and later, Amazon ES takes hourly automated snapshots and retains up to 336 of them for 14 days.
  • For domains running Elasticsearch 5.1 and earlier, Amazon ES takes daily automated snapshots (during the hour you specify) and retains up to 14 of them for 30 days.

In both cases, the service stores the snapshots in a preconfigured Amazon S3 bucket at no additional charge. You can use these automated snapshots to restore domains.
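The announcement doesn't show the restore flow, but these automated snapshots are exposed through the standard Elasticsearch snapshot API. A minimal sketch in Python, assuming the automated snapshots live in the cs-automated repository (the name AWS documents for them), with a hypothetical domain endpoint and index name:

```python
# Sketch: list the automated snapshots on an Amazon ES domain and restore
# one index from the latest. Endpoint and index name are hypothetical.
import requests

endpoint = "https://search-mydomain.us-east-1.es.amazonaws.com"

# List automated snapshots (up to 336 of them on 5.3+ domains).
snapshots = requests.get(f"{endpoint}/_snapshot/cs-automated/_all").json()
latest = snapshots["snapshots"][-1]["snapshot"]

# An index being restored must not already exist open; delete it first.
requests.delete(f"{endpoint}/my-index")

# Restore that one index from the chosen snapshot.
requests.post(
    f"{endpoint}/_snapshot/cs-automated/{latest}/_restore",
    json={"indices": "my-index"},
)
```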

Makes management that much easier...

Point-In-Time Recovery for Amazon DynamoDB

A feature Amazon DynamoDB announced on March 26: backup and restore with per-second granularity: "New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR)".

You enable the feature first; once it's on, DynamoDB starts recording, and you can restore the data to any point in time within the past 35 days:

DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days.
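A minimal boto3 sketch of the two calls involved, with hypothetical table names and timestamp (the restore always lands in a new table):

```python
# Sketch: enable PITR on a table, then restore to a specific second.
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

# Turn on continuous backups / PITR for the table.
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a given second (within the 35-day window) into a new table;
# the source table stays untouched.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-restored",
    RestoreDateTime=datetime(2018, 3, 25, 12, 0, 0, tzinfo=timezone.utc),
)
```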

From then on, changing or deleting data is effectively a copy-on-write operation inside the system, so it needs extra storage, and that part is billed separately:

Pricing for continuous backups is detailed on the DynamoDB Pricing Pages. Pricing varies by region and is based on the current size of the table and indexes. For example, in US East (N. Virginia) you pay $0.20 per GB based on the size of the data and all local secondary indexes.

A feature like this is usually something that was planned for in the original design (so the underlying data structures can support it cheaply), and they've only now shipped it... software like MySQL has no way to be bent into this shape XDDD

At the end they list the supported regions one by one, rather than saying every region that has Amazon DynamoDB is supported:

PITR is available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (Sao Paulo) Regions starting today.

Comparing against the full region list, it looks like Paris and the US GovCloud regions are missing... one only opened at the end of last year, and the other has always been slow to get new features.

Cross-Region Replication and Backups for Amazon DynamoDB

Amazon DynamoDB has implemented global replication, plus a backup feature: "Amazon DynamoDB Update – Global Tables and On-Demand Backup".

The cross-region replication feature lets every region access DynamoDB from its local datacenter:

Global Tables – You can now create tables that are automatically replicated across two or more AWS Regions, with full support for multi-master writes, with a couple of clicks. This gives you the ability to build fast, massively scaled applications for a global user base without having to manage the replication process.

This is somewhat similar to Google Cloud Spanner, which also rolled out as a global service a while back, except that what DynamoDB offers leans toward NoSQL rather than an RDBMS.

One limitation is that cross-region synchronization is async, so there is a replication lag problem:

Updates are propagated to other Regions asynchronously via DynamoDB Streams and are typically complete within one second (you can track this using the new ReplicationLatency and PendingReplicationCount metrics).
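A boto3 sketch of setting this up and watching the lag, assuming the per-region tables already exist (empty, with streams enabled); table names and regions are hypothetical:

```python
# Sketch: join per-region tables into one global table, then read the new
# ReplicationLatency metric from CloudWatch.
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Join the per-region tables into one multi-master global table.
dynamodb.create_global_table(
    GlobalTableName="sessions",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)

# Track the async replication delay toward one receiving region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ReplicationLatency",
    Dimensions=[
        {"Name": "TableName", "Value": "sessions"},
        {"Name": "ReceivingRegion", "Value": "eu-west-1"},
    ],
    StartTime=datetime(2017, 11, 29, 0, 0, tzinfo=timezone.utc),
    EndTime=datetime(2017, 11, 29, 1, 0, tzinfo=timezone.utc),
    Period=60,
    Statistics=["Average"],
)
```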

With a mechanism like this, though, it's not clear how conflicts get resolved... the article doesn't say.

And the supported regions are still limited for now:

Global Tables are available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), and EU (Frankfurt) Regions today, with more Regions in the works for 2018.

The other feature is the backup and restore mechanism; having this built in is a big convenience for many projects:

On-Demand Backup – You can now create full backups of your DynamoDB tables with a single click, and with zero impact on performance or availability. Your application remains online and runs at full speed. Backups are suitable for long-term retention and archival, and can help you to comply with regulatory requirements.
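On the API side this is two boto3 calls; a sketch with hypothetical names (note the restore always goes into a new table):

```python
# Sketch: take an on-demand backup and restore it into a new table.
import boto3

dynamodb = boto3.client("dynamodb")

# Full backup with no impact on the live table.
resp = dynamodb.create_backup(
    TableName="sessions",
    BackupName="sessions-2017-11-29",
)
backup_arn = resp["BackupDetails"]["BackupArn"]

# The restore creates a new table rather than overwriting the original.
dynamodb.restore_table_from_backup(
    TargetTableName="sessions-restored",
    BackupArn=backup_arn,
)
```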

The backup and restore feature is being rolled out gradually, and the regions are limited too:

We are rolling this new feature out on an account-by-account basis as quickly as possible, with initial availability in the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions.

MySQL Backups Made with Percona XtraBackup Can Be Restored Directly onto Amazon RDS

AWS announced that Amazon RDS for MySQL now supports restoring from files backed up with Percona XtraBackup: "Easily restore an Amazon RDS MySQL database from your MySQL backup".

Starting today you can easily restore a new Amazon RDS for MySQL database instance from a backup of your existing MySQL database, including MySQL databases running on Amazon EC2 or outside of AWS. This is done by creating a backup using the Percona XtraBackup tool and uploading the resulting files to an Amazon S3 bucket. You then create a new Amazon RDS DB Instance from the backup files in Amazon S3, directly through the RDS Console or AWS Command Line Interface.

Once the backup is done you upload it to Amazon S3, and then RDS can pull it in...
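A boto3 sketch of that final step, creating the new instance from the uploaded files; the bucket, prefix, IAM role, and instance parameters are all hypothetical:

```python
# Sketch: create a new RDS MySQL instance from XtraBackup files in S3.
import boto3

rds = boto3.client("rds")

rds.restore_db_instance_from_s3(
    DBInstanceIdentifier="restored-mysql",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",           # hypothetical credentials
    SourceEngine="mysql",
    SourceEngineVersion="5.6.34",             # version of the backed-up server
    S3BucketName="my-xtrabackup-bucket",
    S3Prefix="backups/2018-03-26",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/rds-s3-ingest",
)
```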

This feature is available in all AWS Commercial regions for databases using MySQL version 5.6.

It's available in all commercial regions now, which means GovCloud (US) doesn't have it yet (not that most people have access to that anyway).

They only mention 5.6, though; does that mean 5.7 isn't supported yet? Worst case you restore to 5.6 and then upgrade to 5.7 afterwards?

How Facebook Backs Up MySQL Data and Validates Its Correctness

Facebook has written at some length about how they back up MySQL data and validate its correctness: "Continuous MySQL backup validation: Restoring backups".

First is the Continuous Restore Tier (CRT). You can see they rely heavily on HDFS as the first-layer home for backups, covering full logical backups (via mysqldump), differential (diff) backups, and binary log (binlog) backups (streamed into HDFS).

They also run GTIDs, which makes the downstream processing more convenient:

All of our database servers also use global transaction IDs (GTIDs), which gives us another layer of control when replaying transactions from binlog backups.

The CRT part is really just existing tools stacked together, and different shops will do it differently depending on their scale. The real highlight is the ORC Restore Coordinator (ORC), where you can see Facebook has written a lot of software to automate the whole restore process:

When a restore request comes in, a peon pulls the data out of HDFS and replays binlogs to roll it forward:

Peons contain all relevant logic for retrieving backups from HDFS, loading them into their local MySQL instance, and rolling them forward to a certain point in time by replaying binlogs. Each restore job a peon works on goes through these five stages[.]
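Facebook's peon code isn't public, but the restore-then-roll-forward idea itself is standard. A conceptual Python sketch, with hypothetical HDFS paths and database names, of what one such job does:

```python
# Conceptual sketch of restore-and-roll-forward (not Facebook's actual
# peon code): fetch a logical backup plus binlogs from HDFS, load the
# backup into a local MySQL instance, then replay binlogs up to the
# requested point in time.
import subprocess

def restore_to(point_in_time: str) -> None:
    # 1. Fetch the full backup and the binlogs out of HDFS.
    subprocess.run(["hdfs", "dfs", "-get", "/backups/db1/full.sql", "."], check=True)
    subprocess.run(["hdfs", "dfs", "-get", "/backups/db1/binlogs", "."], check=True)

    # 2. Load the full logical backup (mysqldump output) into local MySQL.
    with open("full.sql", "rb") as dump:
        subprocess.run(["mysql", "-h", "127.0.0.1", "testdb"], stdin=dump, check=True)

    # 3. Roll forward by replaying binlogs up to the target time; with
    #    GTIDs enabled, already-applied transactions are skipped.
    replay = subprocess.Popen(
        ["mysqlbinlog", "--stop-datetime", point_in_time, "binlogs/binlog.000001"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["mysql", "-h", "127.0.0.1", "testdb"],
                   stdin=replay.stdout, check=True)
    replay.stdout.close()
    replay.wait()

restore_to("2017-01-01 12:00:00")
```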

It's only because Facebook runs MySQL at a scale where all of this has to be automated that these tools exist at all...