
DNSFilter's Experience Replacing InfluxDB with TimescaleDB

This DNSFilter post about InfluxDB and TimescaleDB is quite interesting: "Towards 3B time-series data points per day: Why DNSFilter replaced InfluxDB with TimescaleDB".

Without having used both myself, this can only count as one side of the story... Also, this kind of migration has a lot to do with how each company is organized: a team already familiar with PostgreSQL is far more likely to reach for TimescaleDB to handle time series data.

One part is worth writing down, though:

Comparing TimescaleDB to InfluxDB at the same time — we realized we were losing data. InfluxDB relied on precisely timed execution of rollup commands to process the last X minutes of data into rollups. Combined with our series of rollups, we realized that some slow queries were causing us to lose data. The TimescaleDB data had 1–5% more entries! Also we no longer had to deal with cardinality issues, and could show our customers every last DNS request, even at a monthly rollup.

Losing data is effectively a warning shot to InfluxDB users: go verify that the data you have is correct... For applications that require 100% correctness, that's no joke @_@
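
For reference, a rollup in TimescaleDB is just a SQL aggregate over the raw hypertable, so there is no precisely timed job that can silently drop data; re-running the query re-derives the rollup. A minimal sketch, with a hypothetical dns_requests table (schema and names made up for illustration):

-- Hypothetical raw table; TimescaleDB partitions it by time.
CREATE TABLE dns_requests (
    time        TIMESTAMPTZ NOT NULL,
    customer_id INTEGER     NOT NULL,
    domain      TEXT        NOT NULL
);
SELECT create_hypertable('dns_requests', 'time');

-- A 5-minute rollup is an ordinary query over the raw data,
-- so it can be recomputed at any time without losing entries.
SELECT time_bucket('5 minutes', time) AS bucket,
       customer_id,
       count(*) AS requests
FROM dns_requests
GROUP BY bucket, customer_id
ORDER BY bucket;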

Microsoft SQL Server Can Now Run on t2.xlarge and t2.2xlarge...

AWS announced that Microsoft SQL Server can now run on the t2 family: "Amazon EC2 T2 instance types are now supported on Windows with SQL Server Enterprise".

Probably because of memory requirements, it's currently only available on t2.xlarge (16GB RAM) and t2.2xlarge (32GB RAM):

Windows with SQL Server Enterprise Edition is now available on t2.xlarge and t2.2xlarge instance types.

The obvious use case is test environments; another might be internal systems that can't be shut down, using off-peak hours to accumulate CPU credits?

Amazon RDS Announces Support for PostgreSQL 10

Amazon RDS announced support for PostgreSQL 10: "PostgreSQL 10 now Supported in Amazon RDS". And this release also includes the 10.1 patches:

As of version 10, PostgreSQL no longer uses three-part version numbers, and is shifting to two-part version numbers. This release includes all patches from the PostgreSQL 10.1 minor version.

The first release of 10 was in early October last year (see "PostgreSQL 10 Released"), and 10.1 followed in early November ("PostgreSQL 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, and 9.2.24 released!"). It's now late February, so the lag is a bit over three months...

10.2 came out in early February; we'll see how long that one takes...

The time Field Added in Trac 1.1, and Migrating Due Date Data

Trac's versioning scheme is a bit like the early Linux kernel's: even minor versions are stable releases and odd ones are development releases... The Linux kernel no longer does this, but Trac still develops that way.

I had been running Trac 1.0 all along, with the Due Date feature provided by the "DateFieldPlugin" plugin, which gives Trac a date type so that Due Date can be defined under [ticket-custom]:

due_date = text
due_date.date = true
due_date.date_empty = false
due_date.label = Due Date
due_date.value = <now>

The plugin's page notes that Trac 1.1.1 and later have a built-in way to do this:

Notice: This plugin is deprecated in Trac 1.2 and later. Custom fields of type ​time were added in Trac 1.1.1.

That link points at the 1.1 docs, but I'm testing 1.2, so digging through the current docs turns up the description in TracTicketsCustomFields: (I won't bother reproducing the original HTML layout here; a plain pre block with indentation will do)

time: Date and time picker. (Since 1.1.1.)
    label: Descriptive label.
    value: Default date.
    order: Sort order placement.
    format: One of:
        relative for relative dates.
        date for absolute dates.
        datetime for absolute date and time values.

With that, the configuration becomes:

due_date = time
due_date.format = date
due_date.label = Due Date
due_date.value = now

But how is the data stored underneath? Looking at the structure of the ticket_custom table first, you can see it's an EAV design:

+--------+------------+------+-----+---------+-------+
| Field  | Type       | Null | Key | Default | Extra |
+--------+------------+------+-----+---------+-------+
| ticket | int(11)    | NO   | PRI | NULL    |       |
| name   | mediumtext | NO   | PRI | NULL    |       |
| value  | mediumtext | YES  |     | NULL    |       |
+--------+------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

Pulling a few rows shows the storage is very straightforward:

+--------+----------+------------+
| ticket | name     | value      |
+--------+----------+------------+
|      1 | due_date | 2016-10-03 |
+--------+----------+------------+
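
(The rows above would come from a query along these lines; the WHERE clause just narrows it to this field:)

SELECT ticket, name, value
FROM ticket_custom
WHERE name = 'due_date';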

After switching to Trac 1.2's built-in time type, storing 2018/02/28 becomes:

+--------+----------+--------------------+
| ticket | name     | value              |
+--------+----------+--------------------+
|      1 | due_date | 001519776000000000 |
+--------+----------+--------------------+

Strip the six trailing zeros and you get 2018/02/28 back, i.e. the value is epoch time in microseconds. Note that this is affected by the time zone: on my first test I hadn't adjusted it, and the stored time was computed with the server's default time zone. The two leading zeros also make sense: the value is zero-padded so that string comparison orders the same way as the actual numbers.

$ date --date=@1519776000
Wed Feb 28 00:00:00 UTC 2018

So now we know how to do the conversion by hand...
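
A minimal sketch of that manual conversion in MySQL, assuming the old values are plain YYYY-MM-DD strings and that the timestamps should be interpreted as UTC (adjust the time zone to taste, and test on a copy first):

-- Pin the session time zone so UNIX_TIMESTAMP() is not affected by
-- the server's default time zone.
SET time_zone = '+00:00';

-- Rewrite old DateFieldPlugin values ('2016-10-03') into the
-- zero-padded epoch-microseconds format used by Trac 1.2.
UPDATE ticket_custom
SET value = LPAD(CONCAT(UNIX_TIMESTAMP(value), '000000'), 18, '0')
WHERE name = 'due_date'
  AND value LIKE '____-__-__';

For 2016-10-03 this produces 001475452800000000, the same 18-character format as the 2018/02/28 example above.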

Collecting PostgreSQL Metrics with Percona Monitoring and Management (PMM)

It's rare to see a post dedicated to PostgreSQL on the Percona blog: "Collect PostgreSQL Metrics with Percona Monitoring and Management (PMM)".

It's actually layered on top of Prometheus:

Starting from PMM 1.4.0. it’s possible to add monitoring for any service supported by Prometheus.

You can see it in the setup steps too:

3. In the next dialog, choose Prometheus as a data source and continue.

The approach feels a bit hacky, but I suppose working is what matters? XD

DynamoDB Can Now Be Encrypted with KMS...

DynamoDB on AWS can now be encrypted through KMS: "New – Encryption at Rest for DynamoDB".

You simply enable encryption when you create a new table and DynamoDB takes care of the rest. Your data (tables, local secondary indexes, and global secondary indexes) will be encrypted using AES-256 and a service-default AWS Key Management Service (KMS) key.

It looks like it's not your own KMS key but a service-default one, so this appears to be encryption at the I/O level, which still isn't searchable encryption...

Amazon Aurora (PostgreSQL) Is Now Available in AWS Tokyo Too

Just saw AWS announce that Amazon Aurora (PostgreSQL) is now available in Tokyo: "Amazon Aurora with PostgreSQL Compatibility is Available in the Asia Pacific (Tokyo) Region".

The PostgreSQL-compatible edition of Amazon Aurora is now available in 10 regions. With the addition of the AWS Asia Pacific (Tokyo) region, you have a new option for database placement, availability, and scalability.

The Region Table hasn't been updated yet, though; Tokyo isn't checked in the Asia section. It should be updated within a few days...

Amazon Aurora (MySQL) Releases a MySQL 5.7 Compatible Version

Amazon Aurora (MySQL) now has a version compatible with MySQL 5.7: "Amazon Aurora is Compatible with MySQL 5.7".

The description on the website hasn't been updated yet, though:

Amazon Aurora is a relational database service that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. The MySQL-compatible edition of Aurora delivers up to 5X the throughput of standard MySQL running on the same hardware, and is designed to be compatible with MySQL 5.6, enabling existing MySQL applications and tools to run without requiring modification.

One of 5.7's selling points is spatial index support (via R-tree), but AWS's version looks like their own implementation using a B-tree plus a Z-order curve: "Amazon Aurora under the hood: indexing geospatial data using Z-order curves".
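
For reference, this is what the stock MySQL 5.7 feature looks like; the table and column names below are made up for illustration:

-- MySQL 5.7: InnoDB supports SPATIAL (R-tree) indexes on geometry columns.
CREATE TABLE places (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    location POINT NOT NULL,
    SPATIAL INDEX (location)
) ENGINE=InnoDB;

-- Bounding-box queries through the MBR/ST_* functions can use the index.
SELECT id
FROM places
WHERE MBRContains(
    ST_GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'),
    location);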

As for their benchmark, I'd take it with a grain of salt: running a 1GB data set on an r3.8xlarge with 244GB of RAM, do they take everyone for fools...

Percona Server Brings in MyRocks

I only found out from "MyRocks Engine: Things to Know Before You Start" that Percona Server already made MyRocks GA (General Availability) back in January: "Percona Server for MySQL 5.7.20-19 Is Now Available".

New Features:
Now MyRocks Storage Engine has General Availability status.

The February post covers a few key points, such as how to install it:

Now if you use Percona repositories, you can simply install MyRocks plugin and enable it with ps-admin --enable-rocksdb.

The post also describes some important differences (in the "What other differences should you be aware of?" section). For instance, it's not one file per table; like early InnoDB, everything lives in a single bundle:

Let’s look at the directory layout. Right now, all tables and all databases are stored in a hidden .rocksdb directory inside mysqldir. The name and location can be changed, but still all tables from all databases are stored in just a series of .sst files. There is no per-table / per-database separation.

It also notes that the engine name is ROCKSDB (as seen in the ENGINE=ROCKSDB part). Then there's isolation level support: only READ-COMMITTED and SERIALIZABLE, with no REPEATABLE-READ:

Keep in mind that at this time MyRocks supports only READ-COMMITTED and SERIALIZABLE isolation levels. There is no REPEATABLE-READ isolation level and no gap locking like in InnoDB. In theory, RocksDB should support SNAPSHOT isolation level. However, there is no notion of SNAPSHOT isolation in MySQL so we have not implemented the special syntax to support this level. Please let us know if you would be interested in this.
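
A minimal sketch of trying it out, assuming the plugin is already enabled (table and column names are made up):

-- MyRocks has no REPEATABLE-READ, so pick a supported level first.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

CREATE TABLE myrocks_table (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255) NOT NULL
) ENGINE=ROCKSDB;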

And bulk loads have a known crash problem when the data is larger than memory:

For bulk loads, you may face problems trying to load large amounts of data into MyRocks (and unfortunately this might be the very first operation when you start playing with MyRocks as you try to LOAD DATA, INSERT INTO myrocks_table SELECT * FROM innodb_table or ALTER TABLE innodb_table ENGINE=ROCKSDB). If your table is big enough and you do not have enough memory, RocksDB crashes. As a workaround, you should set rocksdb_bulk_load=1 for the session where you load data.
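
The workaround from the quote, written out as SQL (myrocks_table and innodb_table are the placeholder names from the quote):

-- Turn on bulk load mode for this session before the big insert...
SET SESSION rocksdb_bulk_load = 1;

INSERT INTO myrocks_table SELECT * FROM innodb_table;

-- ...and turn it off again afterwards.
SET SESSION rocksdb_bulk_load = 0;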

And there's no hot backup tool like Percona XtraBackup yet; for now backups have to be done the traditional way (mysqldump or a filesystem snapshot):

Right now there is no hot backup software like Percona XtraBackup to perform a hot backup of MyRocks tables (we are looking into this). At this time you can use mysqldump for logical backups, or use filesystem-level snapshots like LVM or ZFS.

I'll look for a chance to test the differences between the two...

Amazon Aurora (MySQL) Opens Parallel Query for Preview Sign-up

AWS announced Parallel Query support for Amazon Aurora (MySQL): "Amazon Aurora Parallel Query is Available for Preview".

The Parallel Query here is more like Amazon Athena, fanning a single query out across many machines:

Amazon Aurora Parallel Query improves the performance of large analytic queries by pushing processing down to the Aurora storage layer, spreading processing across hundreds of nodes.

In other words, this is the next step up from parallel execution of a single SQL query.

Before this, AWS already supported running a single query across multiple CPUs on one machine. PostgreSQL has had this built in since 9.6, and Amazon Aurora (MySQL) got single-query multi-CPU execution for certain scenarios back in 2016 via Parallel Read Ahead (which I apparently never wrote about...): "Amazon Aurora Update – Parallel Read Ahead, Faster Indexing, NUMA Awareness".
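
For the PostgreSQL 9.6+ case, single-node parallelism is just a couple of settings away and shows up in the query plan; a minimal sketch (big_table is a placeholder):

-- Allow up to four parallel workers per query (PostgreSQL 9.6+).
SET max_parallel_workers_per_gather = 4;

-- For a large enough table the plan shows a Gather node with
-- Parallel Seq Scan workers underneath it.
EXPLAIN SELECT count(*) FROM big_table;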

The feature is currently in Preview, open for testing in these regions:

The preview is available for the MySQL-compatible edition of Amazon Aurora, and is currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions. Sign up to get access.

This gives people who want more performance, but don't want to touch their architecture, a way to just buy their way out...
