Google's new Lyra audio codec

Saw "Lyra audio codec enables high-quality voice calls at 3 kbps bitrate" on Hacker News Daily, covering Google's newly released Lyra audio codec: "Lyra: A New Very Low-Bitrate Codec for Speech Compression". The paper is available as "Generative Speech Coding with Predictive Variance Regularization".

Google's stated goal is to make usable video calls work within 56kbps of bandwidth:

Pairing Lyra with new video compression technologies, like AV1, will allow video chats to take place, even for users connecting to the internet via a 56kbps dial-in modem.

The breakthrough is transmitting voice at just 3 kbps while sounding noticeably clearer than Opus at 6 kbps.

Google provides two samples in the post, one with a clean background and one with a noisy background; both sound much better than Opus and Speex.

The paper claims the computational requirements are modest, but I couldn't find source code on GitHub or anywhere else, so treat this as a reference for now:

We provide extensive subjective performance evaluations that show that our system based on generative modeling provides state-of-the-art coding performance at 3 kb/s for real-world speech signals at reasonable computational complexity.

Eventbrite's MySQL upgrade project

In 2021 I came across Eventbrite's MySQL upgrade plan: "MySQL High Availability at Eventbrite".

It looks like their MySQL 5.1 setup ran into trouble in early 2019, prompting them to schedule an upgrade; by mid-2019 they had moved everything to MySQL 5.7 (the Percona Server build):

Our first major hurdle was to get current with our version of MySQL. In July, 2019 we completed the MySQL 5.1 to MySQL 5.7 (v5.7.19-17-log Percona Server to be precise) upgrade across all MySQL instances.

They appear to run MySQL directly on EC2, though the storage limit mentioned here is unclear to me. Did they actually hit the EBS size ceiling? The commonly used gp2 and gp3 volume types both cap at 16 TB, so I'm not sure whether they really came that close to running out:

Not only was support for MySQL 5.1 at End-of-Life (more than 5 years ago) but our MySQL 5.1 instances on EC2/AWS had limited storage and we were scheduled to run out of space at the end of July. Our backs were up against the wall and we had to deliver!

As part of the 5.7 upgrade, they also took the chance to convert all the primary keys that were INT over to BIGINT:

As part of the cut-over to MySQL 5.7, we also took the opportunity to bake in a number of improvements. We converted all primary key columns from INT to BIGINT to prevent hitting MAX value.
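
That kind of conversion is a single piece of DDL; a minimal sketch (the table and column names here are made up) looks like this, and on a table of any size you would push it through an online schema-change tool such as the gh-ost setup described below:

```sql
-- Widen the primary key before it hits INT's ceiling (2,147,483,647).
-- On a large production table, run this via gh-ost rather than directly.
ALTER TABLE orders
  MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;
```

Any foreign key columns referencing the key need the same widening, which is part of what makes these migrations expensive on big tables.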

Their old Django also couldn't cope with MySQL 5.7, so they had to upgrade to Django 1.6 (note that the last release in the Django 1.x line is 1.11; it sounds like getting to 1.6, to the point of barely working, was as far as they could climb?):

In parallel with the MySQL 5.7 upgrade we also Upgraded Django to 1.6 due a behavioral change in MySQL 5.7 related to how transactions/commits were handled for SELECT statements. This behavior change was resulting in errors with older version of Python/Django running on MySQL 5.7

They also adopted gh-ost, developed at GitHub, as their schema-change tool:

In December 2019, the Eventbrite DBRE successfully implemented a table ALTER via gh-ost on one of our larger MySQL tables.

The main driver seems to be the limitations they hit with pt-online-schema-change (covered previously in my post on GitHub's approach to ALTER TABLE):

Eventbrite had traditionally used pt-online-schema-change (pt-osc) to ALTER MySQL tables in production. pt-osc uses MySQL triggers to move data from the original to the “duplicate” table which is a very expensive operation and can cause replication lag. Matter of fact, it had directly resulted in several outages in H1 of 2019 due to replication lag or breakage.
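
For reference, a gh-ost run for the kind of ALTER above might look like this; the hostname, credentials, database name, and thresholds are all placeholders:

```bash
# gh-ost connects to a replica by default and tails the binlog from there,
# avoiding the triggers (and trigger-induced replication load) pt-osc relies on.
gh-ost \
  --host=replica-1.example.internal \
  --user=ghost \
  --password='***' \
  --database=eventbrite_db \
  --table=orders \
  --alter="MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT" \
  --chunk-size=1000 \
  --max-load="Threads_running=25" \
  --verbose \
  --execute   # without --execute, gh-ost only performs a dry run
```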

Another piece they brought in is Orchestrator, initially paired with HAProxy, though they plan to move to ProxySQL later:

Next on the list was implementing improvements to MySQL high availability and automatic failover using Orchestrator. In February of 2020 we implemented a new HAProxy layer in front of all DB clusters and we released Orchestrator to production!

Orchestrator can successfully detect the primary failure and promote a new primary. The goal was to implement Orchestrator with HAProxy first and then eventually move to Orchestrator with ProxySQL.

Finally, they mention Shift, developed at Square, which wraps gh-ost behind a web UI for operators.

It's fun to still see write-ups like this in 2021...

EBS io1 adds an option to attach to multiple instances

EBS io1 volumes now have an option to be attached to up to 16 EC2 instances at once: "New – Multi-Attach for Provisioned IOPS (io1) Amazon EBS Volumes".

Looking at the supported regions first: the traditional mainstay regions (us-east-1 and eu-west-1) are covered, while in Asia it's South Korea, somewhat unexpectedly, that gets it first:

Multi-Attach for Provisioned IOPS (io1) volumes on Amazon Elastic Block Store (EBS) is available today at no extra charge to customers in the US East (N. Virginia & Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Seoul) regions.

The main intended use is HA:

Multi-Attach capability makes it easier to achieve higher availability for applications that provide write ordering to maintain storage consistency.

Heartbeat-style applications should be able to use this, although you could already do detach & attach through the command line API; this just saves one step...
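
A sketch of the flow with the AWS CLI (the IDs are placeholders; Multi-Attach also requires Nitro-based instances in the same AZ as the volume):

```bash
# create an io1 volume with Multi-Attach turned on
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type io1 \
  --iops 1000 \
  --size 100 \
  --multi-attach-enabled

# attach the same volume to two different instances
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0aaaa1111bbbb2222c --device /dev/sdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0dddd3333eeee4444f --device /dev/sdf
```

Of course the filesystem on top has to tolerate concurrent writers, which is where the shared-disk filesystems mentioned below come in.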

The second thing that comes to mind: in physical datacenter environments there are filesystems (see "Shared-disk file systems" for a list) that can mount the same block storage concurrently (usually over a SAN), and now you can pull that off on AWS too.

That said, io1 isn't cheap as I recall...

Amazon RDS launches a connection pool product

Amazon RDS has launched a connection pooling product called Amazon RDS Proxy: "Introducing Amazon RDS Proxy (Preview)".

It currently supports MySQL (both the traditional flavor and the Aurora one):

Amazon RDS Proxy supports Amazon RDS for MySQL and Amazon Aurora with MySQL compatibility, with support for additional RDS database engines coming soon.
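
Spinning one up should look roughly like this with the AWS CLI; the names, ARNs, and subnet IDs are placeholders, and the proxy reads database credentials out of Secrets Manager:

```bash
# create the proxy; DB credentials come from a Secrets Manager secret
aws rds create-db-proxy \
  --db-proxy-name my-proxy \
  --engine-family MYSQL \
  --auth AuthScheme=SECRETS,SecretArn=arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds \
  --role-arn arn:aws:iam::123456789012:role/rds-proxy-secrets-role \
  --vpc-subnet-ids subnet-0aaa subnet-0bbb

# point the proxy at an existing RDS MySQL instance
aws rds register-db-proxy-targets \
  --db-proxy-name my-proxy \
  --db-instance-identifiers mydb-primary
```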

Pricing appears to be based on the vCPU count of the backing database:

Pricing is simple and predictable: you pay per vCPU of the database instance for which the proxy is enabled.

The pricing page lists USD$0.015 per vCPU-hour (us-east-1 numbers), and for t2-family machines the charge is based on a minimum of 2 vCPUs rather than prorated by actual usage:

RDS Proxy pricing correlates to the number of vCPUs of the database instance for which it is enabled, with a minimum charge for 2 vCPUs.

At that rate, one vCPU runs about USD$10.8 a month (0.015 × 24 × 30), so the 2 vCPU minimum comes to roughly USD$21.6/month, which is fairly pricey... If your SLA allows it, a basic failover approach is probably good enough...

If you really do need to chase that level of SLA, you can test it in these regions:

Amazon RDS Proxy is available in preview for RDS MySQL and Aurora MySQL in US East (N. Virginia), US East (Ohio), US West (Oregon), EU West (Ireland), and Asia Pacific (Tokyo) regions. Support for RDS PostgreSQL and Aurora PostgreSQL is coming soon.

Canonical's Dqlite (High-Availability SQLite)

My first reaction was to wonder what Canonical was even doing; after reading the description I can roughly see where it would be used, but the niche still feels narrow: "Dqlite - High-Availability SQLite".

One scenario might be syncing data across embedded systems... I think? Say the network link is limited in bandwidth or quality and can't reliably reach servers on the Internet, so you want to handle some things locally, but standing up a full PostgreSQL on-site is inconvenient?

Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices.

The other notable bit is that it uses a Raft implementation written in C:

Dqlite (“distributed SQLite”) extends SQLite across a cluster of machines, with automatic failover and high-availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQlite’s outstanding efficiency and tiny footprint.

Having IoT devices participate in Raft... all I can say is it's interesting... XD

Speeding up ls

Saw "When setting an environment variable gives you a 40x speedup", a post about ls performance.

The post comes from Stanford's Sherlock, which despite the name has nothing to do with the TV series; the site's tagline, "The HPC cluster for all your computing needs", makes it clear this is an HPC outfit.

In an HPC environment you can expect single directories holding huge numbers of files, so a user complaining about ls being slow isn't too surprising. What stood out this time is that the user mentioned ls on his own laptop was much faster:

It all started from a support question, from a user reporting a usability problem with ls taking several minutes to list the contents of a 15,000+ entries directory on $SCRATCH.

Having thousands of files in a single directory is usually not very file system-friendly, and definitely not recommended. The user knew this already and admitted that wasn’t great, but when he mentioned his laptop was 1,000x faster than Sherlock to list this directory’s contents, of course, it stung. So we looked deeper.

Skipping straight to the conclusion: the slowdown comes from colorized output, which forces an lstat() per file to look up extra attributes (executable bit, setuid, setgid, and so on):

From 13s with the default settings, to 0.3s with a small LS_COLORS tweak, that’s a 40x speedup right there, for the cheap price of not having setuid/setgid or executable files colorized differently.

Of course, this is now setup on Sherlock, for every user’s benefit.

Setting LS_COLORS='ex=00:su=00:sg=00:ca=00:' makes the lstat() calls go away, so it has been made part of Sherlock's defaults... Environments that don't hit this problem (say, with a well-designed directory layout), or people who want the original behavior, can unset the value to keep the color distinctions :o
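
If you want to reproduce this on your own machine, something like the following should show the difference; the directory path is a placeholder:

```bash
# count the stat-family syscalls ls makes with full coloring enabled
strace -c ls --color=always /scratch/bigdir > /dev/null

# drop the colors that require a per-file lstat() (executable, setuid,
# setgid, capabilities), then time it again and compare
export LS_COLORS='ex=00:su=00:sg=00:ca=00:'
time ls --color=always /scratch/bigdir > /dev/null
```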

The MySQL architecture at JPMorgan Chase's WePay

Saw "Highly Available MySQL Clusters at WePay", which describes WePay's MySQL design. I initially assumed it was a WeChat service, but after looking it up it turns out to be part of JPMorgan Chase...

The stack runs on GCP. Their original MySQL setup used MHA + HAProxy (a patched build that allows changing the pool dynamically), with Routes handling HAProxy failover.

The problems they ran into: a crash failover took at least 30 minutes to switch over, plus the network partition issues you get across zones on GCP...

The follow-up architecture got even more complex, to the point you wonder whether it actually solved the problem XDDD

They switched to GitHub's Orchestrator, route traffic through two layers of HAProxy (one on the client side, the other being the load balancer from the original design), and on top of that use Consul to push updated state into HAProxy?

Thinking through why it's designed this way (given the financial-industry background) is actually pretty interesting...

Dealing with replication lag on MySQL

Percona's blog has a post on how to handle replication lag in MySQL: "MySQL High Availability: Stale Reads and How to Fix Them", and a commenter also pointed to Booking.com's approach: "How Booking.com avoids and deals with replication lag".

Once the business outgrows a single MySQL server, the simplest way to scale is to set up slave servers and route the application's reads to them (i.e. an R/W split). But because MySQL replication is asynchronous, a read on a slave right after a write to the master may not see the freshly written data; that's replication lag.

There are a few ways to deal with this. One is to fall back to reading from the master whenever lag is detected, but that usually just overloads the master... An improvement is to try another slave when you detect lag, but that doesn't guarantee a fresh read either, since all the slaves may be lagging.

When I ran into this before, we split by use case: the default stayed R/W split, but anything touching sensitive data or payment flows always went to the master.

The solution in the article is more general: write an extra marker row along with the data, then wait on the slave until that marker shows up. The only downside is that you need GC to clean up the extra rows...

Along the same lines, MySQL can hand you the binlog position or GTID at commit time, and you can then wait on the slave until that binlog position or GTID has been applied, as sketched below.
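
A minimal sketch with GTID-based replication on MySQL 5.7+; the GTID set below is a made-up example:

```sql
-- On the master, right after COMMIT: read the executed GTID set.
SELECT @@GLOBAL.gtid_executed;

-- On the slave: block until that GTID set has been applied, waiting at
-- most 1 second. Returns 0 on success and 1 on timeout.
SELECT WAIT_FOR_EXECUTED_GTID_SET('3e11fa47-71ca-11e1-9e33-c80aa9429562:1-77', 1);
```

Pre-GTID setups can do the same dance with MASTER_POS_WAIT() and a binlog file/position pair.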

It looks like a pretty solid solution; I wonder how well the various frameworks support these approaches...

Amazon Aurora Global Database

AWS's Aurora (MySQL) now offers Amazon Aurora Global Database: "Announcing Amazon Aurora Global Database".

It doesn't look like multi-master (judging from the term "secondary region"), so writes still have to go back to the primary region:

Aurora Global Database uses storage-based replication with typical latency of less than 1 second, using dedicated infrastructure that leaves your database fully available to serve application workloads. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities in less than 1 minute.

Presumably a single endpoint takes care of all that plumbing for you...

CloudFront's 10th anniversary, a 10th PoP in Tokyo, plus WebSocket and Origin Failover support

CloudFront is celebrating ten years, and this round adds another edge location in Tokyo, bringing it to 10: "Celebrating the 10 year anniversary of Amazon CloudFront by launching six new Edge locations".

They also announced WebSocket support: "Amazon CloudFront announces support for the WebSocket protocol", as well as Origin Failover support: "Amazon CloudFront announces support for Origin Failover".

WebSocket is a feature people have waited on for quite a while, mostly so they can put CloudFront in front to absorb DDoS traffic and only pass legitimate requests upstream. Origin Failover lets you configure two origins and switch to the backup when the primary goes down; a first step toward multi-origin architectures (for now it looks limited to two):

With CloudFront’s Origin Failover capability, you can setup two origins for your distributions - primary and secondary, such that your content is served from your secondary origin if CloudFront detects that your primary origin is unavailable.

Both are features the folks over at Cloudflare have supported for ages... catching up the product line, I suppose?