
Cloudflare's Workers KV

Cloudflare has launched the Workers KV service: "Building With Workers KV, a Fast Distributed Key-Value Store".

It's a key-value service (global, eventually consistent, with roughly 10 seconds of propagation time); keys are limited to 2KB, values to 64KB, and a namespace can hold up to 1 billion entries.

Reads can reach 100k+ reads/s, but writes are limited to 1 write/s per key, so it's clearly designed primarily for read-heavy data.
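For a feel of the API, here is a minimal sketch of what a read looks like inside a Worker; the namespace binding name (SETTINGS) and the key are made up for illustration:

    // Hypothetical KV namespace binding, configured on the Worker; the name is made up.
    declare const SETTINGS: {
      get(key: string): Promise<string | null>;
      put(key: string, value: string): Promise<void>;
    };

    addEventListener('fetch', (event: FetchEvent) => {
      event.respondWith(handle());
    });

    async function handle(): Promise<Response> {
      // Reads are the fast path (100k+ reads/s); writes are capped at 1/s per key.
      const flags = await SETTINGS.get('feature-flags');
      return new Response(flags ?? '{}', {
        headers: { 'content-type': 'application/json' },
      });
    }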

The existing Workers plan ($5/month) comes with some included quota: 1GB of storage and ten million reads:

Your $5 monthly Workers compute minimum includes 1 GB of KV storage and up to 10 million KV reads. If you use less than the 10 million included Worker requests now, you can use KV without paying a single cent more.

Usage beyond that is billed separately:

Beyond the minimums, Workers KV is billed at $0.50 per GB-month of additional storage and $0.50 per million additional KV reads.

It seems like the basic functionality of an entire blog could run directly on this... with search handled by an external service, and images and video offloaded to external storage?

Amazon S3 now offers higher request rates...

AWS announced increased Amazon S3 performance: "Amazon S3 Announces Increased Request Rate Performance".

Each S3 prefix can now reach 5,500 RPS for reads and 3,500 RPS for writes:

Amazon S3 now provides increased performance to support up to 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance exponentially.

The old numbers are in "Request Rate and Performance Considerations"; it never stated hard limits, but it did say that past 800 RPS reads and 300 RPS writes you should open a support case:

However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.

That said, if you have real volume, it's still better to follow the original prefix advice and spread the load. The CDN in front (CloudFront itself, or Cloudflare) can usually handle this with a simple url rewrite: for example, turning a unix-timestamp (ms) URL like https://www.example.com/1531843366123.jpg into https://www.example.com/6123/1531843366123.jpg. This lets Amazon S3's backend spread the load by prefix, and saves you a lot of pain as the site grows.
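A rough sketch of that rewrite as a Cloudflare Worker; the filename pattern and the 4-digit shard are just the example above, not a recommendation:

    addEventListener('fetch', (event: FetchEvent) => {
      event.respondWith(rewrite(event.request));
    });

    // Turn /1531843366123.jpg into /6123/1531843366123.jpg, using the last
    // four digits of the timestamp as the S3 prefix shard.
    async function rewrite(request: Request): Promise<Response> {
      const url = new URL(request.url);
      const m = url.pathname.match(/^\/(\d+)(\.[a-z0-9]+)$/i);
      if (m) {
        const [, ts, ext] = m;
        url.pathname = `/${ts.slice(-4)}/${ts}${ext}`;
      }
      return fetch(url.toString(), request);
    }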

Using ProxySQL read/write splitting to improve performance on Amazon Aurora

Percona's "Leveraging ProxySQL with AWS Aurora to Improve Performance, Or How ProxySQL Out-performs Native Aurora Cluster Endpoints" is a seriously long read. The finding boils down to this: with AWS's Amazon Aurora, using only the Cluster Endpoint can't squeeze out all the performance; you only get the full gain when you split reads and writes across the Cluster endpoint and the Reader endpoint. It's mainly a pitch for ProxySQL, but other software should manage a similar effect...

Also, one of the charts in the post looks off; probably a copy & paste mistake?

Since layering ProxySQL on later isn't hard, the general advice is still to start with the endpoints the service itself provides (one less piece of infrastructure to maintain). Only when you hit a performance problem should you figure out where the bottleneck is, and bring in ProxySQL or similar software only if it's something an R/W split can actually solve...
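If you do get to that point but don't want another moving part yet, a naive version of the split can live in the application itself; a sketch with the mysql2 driver, with placeholder endpoint hostnames:

    import mysql from 'mysql2/promise';

    // Writes go to the Cluster endpoint, reads to the Reader endpoint.
    const writer = mysql.createPool({ host: 'mydb.cluster-xxxx.us-east-1.rds.amazonaws.com', user: 'app', database: 'app' });
    const reader = mysql.createPool({ host: 'mydb.cluster-ro-xxxx.us-east-1.rds.amazonaws.com', user: 'app', database: 'app' });

    async function query(sql: string, params: unknown[] = []) {
      // Naive routing: anything that starts with SELECT goes to the readers.
      const pool = /^\s*select\b/i.test(sql) ? reader : writer;
      const [rows] = await pool.query(sql, params);
      return rows;
    }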

Amazon DynamoDB's Point-In-Time Recovery

This is a feature Amazon DynamoDB announced on 3/26: a backup and restore mechanism with per-second granularity: "New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR)".

First, turn the feature on. In the console it's a single switch, and it can also be done through the API, as in the sketch below:
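A sketch with the AWS SDK for JavaScript; the table name here is made up:

    import { DynamoDB } from 'aws-sdk';

    const ddb = new DynamoDB();

    async function main() {
      // Enable continuous backups / PITR on an existing table.
      await ddb.updateContinuousBackups({
        TableName: 'my-table',
        PointInTimeRecoverySpecification: { PointInTimeRecoveryEnabled: true },
      }).promise();
    }

    main().catch(console.error);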

Once it's on, recording starts, and you can restore the data to any point within the last 35 days:

DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days.
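A restore always lands in a new table; the corresponding call looks roughly like this (table names and timestamp made up):

    import { DynamoDB } from 'aws-sdk';

    const ddb = new DynamoDB();

    async function main() {
      // Restore the table as it was at one specific second, into a new table.
      await ddb.restoreTableToPointInTime({
        SourceTableName: 'my-table',
        TargetTableName: 'my-table-restored',
        RestoreDateTime: new Date('2018-03-20T12:34:56Z'),
      }).promise();
    }

    main().catch(console.error);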

From then on, changing or deleting data is internally a copy-on-write operation, which needs extra space, and that part is billed separately:

Pricing for continuous backups is detailed on the DynamoDB Pricing Pages. Pricing varies by region and is based on the current size of the table and indexes. For example, in US East (N. Virginia) you pay $0.20 per GB based on the size of the data and all local secondary indexes.

A feature like this usually has to be planned for in the initial design (so the underlying data structures can support it cheaply); now they've simply shipped the implementation... software like MySQL just can't be bent into this shape XDDD

At the end they list the supported regions one by one, rather than saying it's available everywhere Amazon DynamoDB is:

PITR is available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (Sao Paulo) Regions starting today.

Cross-checking the list, it looks like Paris and the US GovCloud regions are missing... the former only opened at the end of last year, and the latter has always been slow to get new features.

How Instagram solved its Cassandra performance problems

When it comes to fixing Cassandra performance, ScyllaDB is probably the most famous effort: a full C++ rewrite that massively improved performance. Instagram's engineers instead swapped out the underlying storage engine for RocksDB (this company really loves its own RocksDB...): "Open-sourcing a 10x reduction in Apache Cassandra tail latency".

The main trigger was finding that Cassandra's data-handling components suffer from JVM GC problems, and that these were the main cause of its poor performance:

Apache Cassandra is a distributed database with it’s own LSM tree-based storage engine written in Java. We found that the components in the storage engine, like memtable, compaction, read/write path, etc., created a lot of objects in the Java heap and generated a lot of overhead to JVM.

After the swap, tests showed a large performance improvement, along with a large drop in GC stalls:

In one of our production clusters, the P99 read latency dropped from 60ms to 20ms. We also observed that the GC stalls on that cluster dropped from 2.5% to 0.3%, which was a 10X reduction!

Comparing the two approaches: ScyllaDB rewrote everything in C++ (keeping the data structures), which eliminates the JVM GC problem outright. Rocksandra instead profiled first and replaced the hot parts (here, the data-handling code, swapped for RocksDB), abstracting some interfaces along the way... two different solutions, both getting rid of the JVM GC problem.

Amazon Aurora's Serverless and Multi-master

Amazon Aurora has rolled out two new things. The first is Serverless, which further reduces the need for human intervention: "In The Works – Amazon Aurora Serverless".

The first highlight of Serverless is per-second billing:

Today we are launching a preview (sign up now) of Amazon Aurora Serverless. Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis.

The other is extremely fast auto-scaling:

The endpoint is a simple proxy that routes your queries to a rapidly scaled fleet of database resources. This allows your connections to remain intact even as scaling operations take place behind the scenes. Scaling is rapid, with new resources coming online within 5 seconds

Combined, these mean that besides scaling quickly on Amazon EC2 on the front end, the database behind it can now scale too...

The second is the Multi-master architecture: "Sign Up for the Preview of Amazon Aurora Multi-Master".

Amazon Aurora Multi-Master allows you to create multiple read/write master instances across multiple Availability Zones. This enables applications to read and write data to multiple database instances in a cluster, just as you can read across Read Replicas today.

(Come to think of it, I had always mistakenly assumed Aurora was already multi-master for reads and writes...)

Anyway, I have no idea how they stacked this feature on... and I do wonder whether serious distributed lock issues will show up, with the eventual recommendation being that everyone normally writes to the same node anyway (the way PXC does).

GitHub repositions what Redis is for...

GitHub Engineering explains why they changed how they use Redis: "Moving persistent data out of Redis".

Inside GitHub, Redis is used in two different ways. One, used purely as a cache, is what they call transient Redis:

We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.

The other has persistence enabled, which they call persistent Redis:

We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.

The post is about replacing persistent Redis with MySQL (InnoDB) storage:

Recently we made the decision to disable persistence in Redis and stop using it as a source of truth for our data. The main motivations behind this choice were to:

  • Reduce the operational cost of our persistence infrastructure by removing some of its complexity.
  • Take advantage of our expertise operating MySQL.
  • Gain some extra performance, by eliminating the I/O latency during the process of writing big changes on the server state to disk.

For the majority of callsites, we replaced persistent Redis with GitHub::KV, a MySQL key/value store of our own built atop InnoDB, with features like key expiration. We were able to use GitHub::KV almost identically as we used Redis: from trending repositories and users for the explore page, to rate limiting to spammy user detection.
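GitHub::KV itself is written in Ruby, but the shape of the idea is simple; here is a hypothetical sketch of a key/value store with expiration on top of InnoDB (the table layout and the mysql2 driver are my choices for illustration, not GitHub's):

    import mysql from 'mysql2/promise';

    // Backing table (InnoDB):
    //   CREATE TABLE key_values (
    //     `key`      VARBINARY(255) PRIMARY KEY,
    //     value      BLOB NOT NULL,
    //     expires_at DATETIME NULL
    //   ) ENGINE=InnoDB;
    const pool = mysql.createPool({ host: 'localhost', user: 'kv', database: 'kv' });

    async function kvSet(key: string, value: string, ttlSeconds?: number): Promise<void> {
      await pool.query(
        'REPLACE INTO key_values (`key`, value, expires_at) ' +
          'VALUES (?, ?, IF(? IS NULL, NULL, NOW() + INTERVAL ? SECOND))',
        [key, value, ttlSeconds ?? null, ttlSeconds ?? null],
      );
    }

    async function kvGet(key: string): Promise<string | null> {
      // Treat expired rows as missing; actual deletion can happen lazily.
      const [rows] = await pool.query(
        'SELECT value FROM key_values ' +
          'WHERE `key` = ? AND (expires_at IS NULL OR expires_at > NOW())',
        [key],
      );
      const hit = (rows as Array<{ value: Buffer }>)[0];
      return hit ? hit.value.toString() : null;
    }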

The rest of the post covers a good chunk of the migration (including rewriting some features), but doesn't spell out very clearly why they didn't keep using Redis.

Going only by the three points they list: maybe the I/O cost of persistence was too high, with little performance left to squeeze out? And conversely, so much effort has already gone into InnoDB that simply adopting it solves the problem?

Either way, the migration clearly took real effort; some applications used Redis in ways that couldn't be moved onto InnoDB directly, and had to be rewritten...

Percona announces MyRocks support (the RocksDB engine for MySQL)

RocksDB is the library Facebook forked from Google's LevelDB, and it has since won broader adoption and more investment than the original... (a look at the two projects' GitHub pages makes this obvious)

After improving it, Facebook's engineers also put considerable work into porting it to MySQL...

There had already been plenty of chatter on Twitter, but this is the official announcement on Percona's blog that they will support MyRocks: "Announcing MyRocks in Percona Server for MySQL".

The current plan is to release experimental builds in Q1 of 2017; going by Percona's usual quality standards, those should run smoothly in a test environment (as long as there's no heavy loading):

We will provide the experimental builds of MyRocks in Percona Server in Q1 2017, and we encourage you to start testing and experimenting so we can quickly release a solid GA version.

A comment under the post happens to bring up Percona's other product line, TokuDB, and the overlap between the two:

MyRocks seems pretty similar to TokuDB. They are both write-optimized. MyRocks uses LSM tree while TokuDB uses fractal tree.

How do the 2 compare? Which one would you recommend using?

TokuDB, which Percona acquired, overlaps heavily with Facebook's MyRocks (both are optimized for writes). Presumably the maturity gap between the fractal tree and the LSM tree still shows up too clearly in performance (and the resources the companies behind them invest also matter), which led Percona to support MyRocks rather than push their own TokuDB exclusively... (hmm, so has TokuDB become the neglected child?)

I wonder whether, once it matures, it has a shot at becoming an InnoDB replacement...

Firefox's heavy writes and their impact on SSDs

"Firefox is eating your SSD - here is how to fix it" looks at the impact of Firefox's writes on SSDs; to quote the fix from the article first:

After some digging, I found out that this behavior is controlled by a parameter that you can access through typing “about:config” in the address bar. This parameter is called: browser.sessionstore.interval

The default is every 15 seconds; the author reset it to every 30 minutes, which cut writes by about 5x (I guess that should be read as down to 1/6?):

It is set to 15 seconds by default. In my case, I reset it to a more sane (at least for me) 30 minutes. Since then, I’m only seeing about 2GB written to disk when my workstation is left idle, which still feels like a lot but is 5 times less than before.

According to an update later in the article, Google Chrome has a similar issue, but no fix is given for now...
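If you want the change to survive as part of the profile, the same pref can also go into user.js (the value is in milliseconds, so 1800000 ms = 30 minutes):

    // user.js in the Firefox profile directory; same effect as editing about:config.
    user_pref("browser.sessionstore.interval", 1800000); // 30 minutes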
