Amazon Web Services (AWS) announced plans to open a new AWS Region in Taiwan by early 2025. The AWS Asia Pacific (Taipei) Region will consist of three Availability Zones at launch.
I also happened to see people debating whether the data centers will all end up in the Taipei–New Taipei area, so I looked up the AZ documentation and found that some of this is now spelled out in black and white (I don't remember it being written down before): "Availability Zones".
Availability Zones in a Region are meaningfully distant from each other, up to 60 miles (~100 km) to prevent correlated failures, but close enough to use synchronous replication with single-digit millisecond latency.
The release is the result of many, many months of effort to move the legacy Presto Gateway to Trino, start a refactor of the project, and add numerous new features.
NOTE: This is a legacy version of Trino Gateway. Please refer to https://github.com/trinodb/trino-gateway for active development and updates moving forward.
Data Transfer IN To Amazon EC2 From Internet
All data transfer in $0.00 per GB
However, when traffic goes from one AZ into another AZ, both the in and the out sides are charged:
Data transferred "in" to and "out" from Amazon EC2, Amazon RDS, Amazon Redshift, Amazon DynamoDB Accelerator (DAX), and Amazon ElastiCache instances, Elastic Network Interfaces or VPC Peering connections across Availability Zones in the same AWS Region is charged at $0.01/GB in each direction.
So calculating with US$0.01/GB alone isn't enough; it has to be calculated at US$0.02/GB.
Likewise, public IPs and Elastic IPs are charged in both directions, as is cross-VPC traffic, so all of these should be calculated at US$0.02/GB as well:
IPv4: Data transferred “in” to and “out” from public or Elastic IPv4 address is charged at $0.01/GB in each direction.
IPv6: Data transferred “in” to and “out” from an IPv6 address in a different VPC is charged at $0.01/GB in each direction.
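Since each gigabyte is billed once on the sending side and once on the receiving side, the effective rate for all of these paths is US$0.02/GB. A minimal sketch of the arithmetic; the traffic volume is a made-up example:

```python
# Cross-AZ, cross-VPC, and public/Elastic IP traffic is billed at
# $0.01/GB in each direction, so one logical transfer costs $0.02/GB.
PER_DIRECTION_USD_PER_GB = 0.01

def cross_az_cost_usd(gb_transferred: float) -> float:
    """Total cost of moving gb_transferred across AZs (billed both ways)."""
    return gb_transferred * PER_DIRECTION_USD_PER_GB * 2

# Example: a hypothetical 10 TB/month of replication traffic between AZs.
print(cross_az_cost_usd(10 * 1024))  # 204.8 USD/month
```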
Note that the letter suffix is not necessarily the same across accounts (us-east-1a in one account can map to a different physical AZ than us-east-1a in another), so you have to confirm the actual underlying AZ through the data AWS provides.
This applies retroactively, taking effect from the beginning of this month:
Starting May 1st 2021, all data transfer over a VPC Peering connection that stays within an Availability Zone (AZ) is now free. All data transfer over a VPC Peering connection that crosses Availability Zones will continue to be charged at the standard in-region data transfer rates. You can use the Availability Zone-ID to uniquely and consistently identify an Availability Zone across different AWS accounts.
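The stable identifier is the zone ID (for example use1-az1), which DescribeAvailabilityZones returns alongside the per-account zone name. A quick sketch with boto3, with the region hardcoded as an example:

```python
import boto3

# Zone *names* (us-east-1a) are shuffled per account; zone *IDs*
# (use1-az1) are stable, so compare IDs when matching AZs across accounts.
ec2 = boto3.client("ec2", region_name="us-east-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(f'{az["ZoneName"]} -> {az["ZoneId"]}')
```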
Then there's the EC2 pricing page, which still says Asia Pacific (Osaka-Local); I can't tell whether that data is new or stale, but judging by past practice it should have been updated...
Checking against the instance types mentioned in the article, the lineup doesn't look complete yet: there are no AMD or Arm-based instances so far, and no GPU instances either:
The Asia Pacific (Osaka) Region supports the C5, C5d, D2, I3, I3en, M5, M5d, R5d, and T3 instance types, in On-Demand, Spot, and Reserved Instance form. X1 and X1e instances are available in a single AZ.
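If you want to check whether those gaps have since been filled, the instance type offerings API lists what a region currently supports. A sketch with boto3; the family prefixes grepped for here are just examples of AMD (m5a) and Graviton/Arm (m6g) types:

```python
import boto3

# List every instance type currently offered in the Osaka region.
ec2 = boto3.client("ec2", region_name="ap-northeast-3")
offered = set()
for page in ec2.get_paginator("describe_instance_type_offerings").paginate(
        LocationType="region"):
    offered.update(o["InstanceType"] for o in page["InstanceTypeOfferings"])

# Check for the AMD (m5a) and Graviton/Arm (m6g) families noted above.
print(sorted(t for t in offered if t.startswith(("m5a.", "m6g."))))
```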
It looks like MySQL 5.1 started causing problems in early 2019, after which they decided to schedule an upgrade, and by mid-2019 they had upgraded the system to MySQL 5.7 (the Percona Server build):
Our first major hurdle was to get current with our version of MySQL. In July, 2019 we completed the MySQL 5.1 to MySQL 5.7 (v5.7.19-17-log Percona Server to be precise) upgrade across all MySQL instances.
Not only was support for MySQL 5.1 at End-of-Life (more than 5 years ago) but our MySQL 5.1 instances on EC2/AWS had limited storage and we were scheduled to run out of space at the end of July. Our backs were up against the wall and we had to deliver!
Also, as part of the upgrade to 5.7, they took the opportunity to convert the primary keys that had been INT to BIGINT:
As part of the cut-over to MySQL 5.7, we also took the opportunity to bake in a number of improvements. We converted all primary key columns from INT to BIGINT to prevent hitting MAX value.
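The ceiling they were avoiding is the signed INT maximum of 2,147,483,647, versus 9,223,372,036,854,775,807 for BIGINT. A back-of-the-envelope sketch of the headroom calculation; the current id and the insert rate are made-up numbers:

```python
INT_MAX = 2**31 - 1     # signed INT primary key ceiling: 2,147,483,647
BIGINT_MAX = 2**63 - 1  # ceiling after the conversion to BIGINT

current_max_id = 1_900_000_000  # hypothetical AUTO_INCREMENT high-water mark
inserts_per_day = 2_000_000     # hypothetical write rate

days_left = (INT_MAX - current_max_id) / inserts_per_day
print(f"~{days_left:.0f} days until the INT key overflows")  # ~124 days
```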
In parallel with the MySQL 5.7 upgrade we also upgraded Django to 1.6 due to a behavioral change in MySQL 5.7 related to how transactions/commits were handled for SELECT statements. This behavior change was resulting in errors with older versions of Python/Django running on MySQL 5.7.
Eventbrite had traditionally used pt-online-schema-change (pt-osc) to ALTER MySQL tables in production. pt-osc uses MySQL triggers to move data from the original to the “duplicate” table which is a very expensive operation and can cause replication lag. Matter of fact, it had directly resulted in several outages in H1 of 2019 due to replication lag or breakage.
Next on the list was implementing improvements to MySQL high availability and automatic failover using Orchestrator. In February of 2020 we implemented a new HAProxy layer in front of all DB clusters and we released Orchestrator to production!
Orchestrator can successfully detect the primary failure and promote a new primary. The goal was to implement Orchestrator with HAProxy first and then eventually move to Orchestrator with ProxySQL.
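HAProxy itself has no idea which node Orchestrator just promoted, so a common pattern is a per-node health check that reports healthy only on the writable primary. A minimal sketch of that pattern using pymysql; the check user and credentials are placeholders, and this is an illustration of the general technique, not Eventbrite's actual implementation:

```python
import pymysql

def is_writable_primary(host: str) -> bool:
    """Report healthy only for the node that currently accepts writes.

    A promotion typically flips read_only off on the new primary, so a
    check like this lets HAProxy follow Orchestrator's failover.
    """
    conn = pymysql.connect(host=host, user="haproxy_check",
                           password="...", connect_timeout=2)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT @@global.read_only")
            (read_only,) = cur.fetchone()
        return read_only == 0
    finally:
        conn.close()
```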
And finally, the post mentions Shift, developed by Square, which wraps gh-ost in a web UI: