AWS Has Wrapped Cassandra into a Managed Service Too

Another freshly announced service (which is how I spotted it on Twitter): AWS has packaged Apache Cassandra as a managed offering called Amazon Managed Apache Cassandra Service: "New – Amazon Managed Apache Cassandra Service (MCS)".

It's also a serverless service: you just use it directly, with no machines to manage:

Amazon MCS is serverless, so you pay for only the resources you use and the service automatically scales tables up and down in response to application traffic.
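Getting a feel for the "no machines" part is easy with the DataStax Python driver. A minimal connection sketch, assuming the TLS endpoint and port 9142 that the documentation listed at launch; the region and the service-specific credentials are hypothetical placeholders:

```python
from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# MCS requires TLS; AmazonRootCA1.pem is Amazon's public root CA bundle.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("AmazonRootCA1.pem")
ssl_context.verify_mode = CERT_REQUIRED

# Hypothetical service-specific credentials generated through IAM.
auth = PlainTextAuthProvider(
    username="alice-at-111122223333",
    password="generated-service-password",
)

# No cluster to provision: just point the driver at the regional endpoint.
cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],
    port=9142,
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect()
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)
```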

You can see this in the billing model as well: charges are for write request units, read request units, and storage, with no per-machine fees in sight.

A quick back-of-the-envelope calculation shows it isn't cheap, though; if you aren't using Cassandra-specific features, it comes out somewhat more expensive than DynamoDB?

It's currently in open preview, meaning you can use it but without any guarantees:

Amazon MCS is available today in open preview in US East (N. Virginia), US East (Ohio), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Tokyo).

One more option to play with...

Amazon Aurora Can Now Use AWS Machine Learning Services Directly

AWS announced that Amazon Aurora can now call AWS's own machine learning services directly: "New for Amazon Aurora – Use Machine Learning Directly From Your Databases".

Two services are integrated: Amazon SageMaker (models of all kinds) and Amazon Comprehend (text processing).
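On the Comprehend side, the integration surfaces as ordinary SQL functions. A hedged sketch, using the aws_comprehend_detect_sentiment function shown in the launch post against a hypothetical product_reviews table, called here through PyMySQL:

```python
import pymysql

# Hypothetical Aurora MySQL endpoint and schema.
conn = pymysql.connect(
    host="my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="shop",
)
with conn.cursor() as cur:
    # The database itself ships each row's text off to Amazon Comprehend,
    # so no ML plumbing is needed on the application side.
    cur.execute(
        """
        SELECT review_id,
               aws_comprehend_detect_sentiment(review_text, 'en') AS sentiment
        FROM product_reviews
        LIMIT 10
        """
    )
    for review_id, sentiment in cur.fetchall():
        print(review_id, sentiment)
```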

For now only Amazon Aurora MySQL 5.7 is supported; the rest are still in the works:

The new machine learning integration is available today for Aurora MySQL 5.7, with the SageMaker integration generally available and the Comprehend integration in preview. You can learn more in the documentation. We are working on other engines and versions: Aurora MySQL 5.6 and Aurora PostgreSQL 10 and 11 are coming soon.

This integration makes things a lot more convenient on the application side...

Amazon Aurora MySQL 5.7 Now Supports Global Database Too

AWS has brought Amazon Aurora Global Database to the Amazon Aurora MySQL 5.7 line: "Aurora Global Database is Now Supported on Amazon Aurora MySQL 5.7".

For the MySQL flavor, Global Database looks like a cross-region master-slave setup (hence the touted lower read latency, with no mention of write latency):

An Amazon Aurora Global Database is a single database that spans multiple AWS regions, enabling low latency global reads and disaster recovery from region-wide outages.

Also, the quoted figure is under one second, so this should be asynchronous replication:

Aurora Global Database replicates writes in the primary region with typical latency of <1 second to secondary regions, for low latency global reads.

And you can fail over across regions:

In disaster recovery situations, you can promote the secondary region to take full read-write responsibilities in under a minute.
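In boto3 terms, that promotion amounts to detaching the secondary cluster from the global cluster, after which it becomes a standalone read-write cluster. A sketch with hypothetical identifiers:

```python
import boto3

# Run this against the secondary region you want to promote.
rds = boto3.client("rds", region_name="us-west-2")

# Removing the secondary from the global cluster promotes it to a
# standalone cluster that accepts writes.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="my-global-db",
    DbClusterIdentifier="arn:aws:rds:us-west-2:111122223333:cluster:my-secondary",
)
```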

From a quick look there doesn't seem to be an extra service charge; you just pay each region's own costs plus data transfer, so it looks like a pretty decent deal?

Amazon Redshift Now Automatically Re-sorts Data in the Background for Better Performance

A new Amazon Redshift feature automatically re-sorts data in the background to improve performance: "Amazon Redshift introduces Automatic Table Sort, an automated alternative to Vacuum Sort".

You need to update to version 1.0.11118, after which it's enabled by default:

This feature is available in Redshift 1.0.11118 and later.

Automatic table sort is now enabled by default on Redshift tables where a sort key is specified.

The re-sorting runs in the background, with some machine-learned analysis of query patterns thrown in:

Redshift runs the sorting in the background and re-organizes the data in tables to maintain sort order and provide optimal performance. This operation does not interrupt query processing and reduces the compute resources required by operating only on frequently accessed blocks of data. It prioritizes which blocks of table to sort by analyzing query patterns using machine learning.
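Since the feature only kicks in when a sort key is defined, one way to watch it work is to track the unsorted percentage in SVV_TABLE_INFO. A sketch with hypothetical connection details, using psycopg2 since Redshift speaks the PostgreSQL wire protocol:

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="secret",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Automatic table sort only applies to tables with a sort key.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(ts timestamp, payload varchar(256)) SORTKEY (ts)"
    )
    # SVV_TABLE_INFO.unsorted is the percentage of rows out of sort-key
    # order; with this feature it should trend toward 0 without VACUUM.
    cur.execute(
        """SELECT "table", unsorted FROM svv_table_info WHERE "table" = 'events'"""
    )
    print(cur.fetchall())
```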

It's one of those things you just leave running; after upgrading it's worth glancing at the CloudWatch reports. Since nothing special is called out there, it should be fine... XD

Amazon Redshift Can Now Handle Spatial Data

With the annual AWS re:Invent happening this month, AWS is starting to trickle out all sorts of announcements... This time Amazon Redshift announced support for spatial data, making coordinate data much more convenient to work with: "Using Spatial Data with Amazon Redshift".

The supported types and the usage limitations are covered in the official documentation, namely "Querying Spatial Data in Amazon Redshift" and "Limitations When Using Spatial Data with Amazon Redshift".

There are actually quite a few limitations, and this one looks painful, since it caps extensibility:

Data types for Python user-defined functions (UDFs) don't support the GEOMETRY data type.

It looks like only the computation side is supported for now; hopefully it eventually becomes a proper native type, which would make it much smoother to use...
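Still, the computation that is supported already covers the common cases. A small sketch, assuming psycopg2 plus the ST_GeomFromText and ST_Distance functions from the documentation; note that Redshift's ST_Distance is planar, so with lon/lat inputs the result comes back in degrees, not meters:

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="secret",
)
with conn.cursor() as cur:
    # Euclidean distance between two points in Taipei (in degrees,
    # since the inputs are longitude/latitude coordinates).
    cur.execute("""
        SELECT ST_Distance(
            ST_GeomFromText('POINT(121.5654 25.0330)'),
            ST_GeomFromText('POINT(121.5170 25.0478)')
        )
    """)
    print(cur.fetchone()[0])
```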

A De-identification Extension for PostgreSQL

An extension I came across via "PostgreSQL Anonymizer 0.5: Generalization and k-anonymity"; it looks like it covers the common, simple de-identification needs:

The extension supports 3 different anonymization strategies: Dynamic Masking, In-Place Anonymization and Anonymous Dumps. It also offers a large choice of Masking Functions: Substitution, Randomization, Faking, Partial Scrambling, Shuffling, Noise Addition and Generalization.

It looks like turning a column into a range can be semi-automated (the SQL still has to call these functions itself); this might come in handy next time PII comes up...
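A minimal sketch of that semi-automated flow, assuming the anon extension is already installed and using the generalization function named in the 0.5 announcement; the people table is hypothetical:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn.cursor() as cur:
    # The SQL still has to call the masking function itself:
    # generalize_int4range(54, 10) returns the range [50,60),
    # trading precision for k-anonymity.
    cur.execute("SELECT anon.generalize_int4range(age, 10) FROM people LIMIT 5")
    for (age_range,) in cur.fetchall():
        print(age_range)
```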

Turning PostgreSQL's EXPLAIN Output into a Flamegraph

Spotted on Hacker News Daily: the mgartner/pg_flame project converts PostgreSQL's EXPLAIN output (in JSON format) into a flamegraph (rendered as HTML):
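A sketch of the pipeline, assuming psycopg2 and the stdin-to-HTML usage described in the project README; the query and table are hypothetical:

```python
import json
import subprocess

import psycopg2

conn = psycopg2.connect("dbname=app")
with conn.cursor() as cur:
    # pg_flame consumes the JSON plan format, so ask EXPLAIN for JSON.
    cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) SELECT * FROM people WHERE age > 30")
    plan = cur.fetchone()[0]

# Feed the plan to pg_flame on stdin and save the rendered flamegraph.
html = subprocess.run(
    ["pg_flame"],
    input=json.dumps(plan).encode(),
    capture_output=True,
    check=True,
).stdout
with open("flamegraph.html", "wb") as f:
    f.write(html)
```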

Personally I'm more used to reading the raw EXPLAIN output... but if you need to make slides, this should be a handy tool?

Amazon Has Moved Another Major Division's Oracle Systems onto AWS's Own Services

This is essentially an AWS PR piece; given the boss's public commitment to the cloud and the political correctness that follows, these systems were always going to move over one after another...

This time it's Amazon's Consumer Business moving off Oracle onto AWS's own services: "Migration Complete – Amazon's Consumer Business Just Turned off its Final Oracle Database".

There were originally 75 PB of data across nearly 7,500 databases:

We migrated 75 petabytes of internal data stored in nearly 7,500 Oracle databases to multiple AWS database services including Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS), and Amazon Redshift.

One touted benefit is cost savings, but more than a hundred teams were thrown at the migration; how long it takes to amortize that effort and break even is impossible to judge without the internal financials, and the scarcity of engineering resources is another piece of information we don't get to see:

Cost Reduction – We reduced our database costs by over 60% on top of the heavily discounted rate we negotiated based on our scale. Customers regularly report cost savings of 90% by switching from Oracle to AWS.

More than 100 teams in Amazon’s Consumer business participated in the migration effort.

The latency improvement is likewise only indicative, since systems get rewritten as part of a migration anyway; without internal data there's no telling how much of it comes from the AWS services themselves:

Performance Improvements – Latency of our consumer-facing applications was reduced by 40%.

The administrative overhead figure is about the only one worth taking at face value, since they did move onto scalable managed services:

Administrative Overhead – The switch to managed services reduced database admin overhead by 70%.

What's left unsaid is more interesting, though: for instance they went with Redshift rather than Athena, which looks like "migrate first, revisit the rest when there's a chance"...

A Story of Buying IOPS with Storage Space on AWS...

"A web performance issue" describes a performance problem in one of Mozilla's systems, the troubleshooting that followed, and the eventual fix.

The system runs on AWS, and after a series of checks it turned out the EBS volumes backing RDS had maxed out their IOPS:

After reading a lot of documentation about Amazon’s RDS set-up I determined that slow downs in the database were related to IOPS spikes. Amazon gives you 3 IOPS per Gb and with a storage of 1 Terabyte we had 3,000 IOPS as our baseline. The graph below shows that at times we would get above that max baseline.

Everyone lands on roughly the same fix: Provisioned IOPS is too expensive, so you simply enlarge the volume to buy IOPS (since General Purpose SSD grants 3 IOPS per GB):

To increase the IOPS baseline we could either increase the storage size or switch from General SSD to Provisioned IOPS storage. The cost of the different storage type was much higher so we decided to double our storage, thus, doubling our IOPS baseline. You can see in the graph below that we’re constantly above our previous baseline. This change helped Treeherder’s performance a lot.
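The arithmetic fits in a few lines; the 3 IOPS/GB rule plus the 100 IOPS floor and 16,000 IOPS cap come from the gp2 documentation:

```python
def gp2_baseline_iops(size_gb: int) -> int:
    # gp2 baseline: 3 IOPS per GB, floored at 100 and capped at 16,000.
    return min(max(100, 3 * size_gb), 16_000)

print(gp2_baseline_iops(1_000))  # 3000 -> the original 1 TB volume
print(gp2_baseline_iops(2_000))  # 6000 -> after doubling the storage
```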

Then set up an alert so that next time the volume can be grown ahead of trouble:

In order to prevent getting into such a state in the future, I also created a CloudWatch alert. We would get alerted if the combined IOPS is greater than 5,700 IOPS for 6 datapoints within 10 minutes.
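Reproducing that alarm takes a metric-math expression, since RDS reports ReadIOPS and WriteIOPS as separate CloudWatch metrics. A hedged boto3 sketch with a hypothetical instance identifier (alarm actions omitted):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def rds_iops_metric(metric_id: str, metric_name: str) -> dict:
    # One input to the combined-IOPS sum; the instance name is hypothetical.
    return {
        "Id": metric_id,
        "MetricStat": {
            "Metric": {
                "Namespace": "AWS/RDS",
                "MetricName": metric_name,
                "Dimensions": [
                    {"Name": "DBInstanceIdentifier", "Value": "treeherder-prod"},
                ],
            },
            "Period": 60,
            "Stat": "Average",
        },
        "ReturnData": False,
    }

# Alarm when combined IOPS exceeds 5,700 for 6 of 10 one-minute datapoints.
cloudwatch.put_metric_alarm(
    AlarmName="rds-combined-iops-high",
    Metrics=[
        rds_iops_metric("r", "ReadIOPS"),
        rds_iops_metric("w", "WriteIOPS"),
        {"Id": "total", "Expression": "r + w", "Label": "Combined IOPS"},
    ],
    EvaluationPeriods=10,
    DatapointsToAlarm=6,
    Threshold=5700,
    ComparisonOperator="GreaterThanThreshold",
)
```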

Note that General Purpose SSD IOPS aren't 100% guaranteed; the docs only promise this much:

AWS designs gp2 volumes to deliver 90% of the provisioned performance 99% of the time.

Should be good enough for most cases, though...

Dqlite (High-Availability SQLite) from Canonical

At first glance I genuinely couldn't tell what Canonical was up to; after going through the description I roughly see where it fits, but the niche still feels narrow: "Dqlite - High-Availability SQLite".

One use case would be synchronizing data on embedded systems... I think? For example, when network bandwidth or quality is too constrained to reliably push data to a server on the Internet, you'd want to handle some things locally, but it's not practical to stand up a full PostgreSQL on the device either?

Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices.

The other notable point is that it builds on a Raft implementation written in C:

Dqlite (“distributed SQLite”) extends SQLite across a cluster of machines, with automatic failover and high-availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQlite’s outstanding efficiency and tiny footprint.

Having IoT devices participate in Raft... all I can say is that it's interesting... XD