
MySQL Is Finally Removing mysql_query_cache

The semi-official MySQL blog announced the plan to remove mysql_query_cache: "MySQL 8.0: Retiring Support for the Query Cache".

The author opens by quoting ProxySQL's take on the MySQL Query Cache:

Although MySQL Query Cache was meant to improve performance, it has serious scalability issues and it can easily become a severe bottleneck.

The main problem is that the MySQL Query Cache scales poorly on multi-CPU machines; it easily becomes a point where many threads contend for a lock. The author also agrees with ProxySQL's view that caching performs better when moved closer to the client:

We also agree with Rene’s conclusion, that caching provides the greatest benefit when it is moved closer to the client:

You can see that the Query Cache badly hurts performance in complex environments. I've written about similar findings before: "Percona's test of mysql_query_cache (using Magento as an example)" and "Turning off MySQL's Query Cache".

In general, if caching is what you're after, well-designed indexes in InnoDB should still carry quite a bit of load on their own.

How Reddit Handles Page Views

Reddit explained how they handle pageviews: "View Counting at Reddit".

At Reddit's scale the post highlights two key points. The first is making good use of Redis's HyperLogLog data structure; when the volume is large, a small error is acceptable:

The amount of memory varies per implementation, but in the case of this implementation, we could count over 1 million IDs using just 12 kilobytes of space, which would be 0.15% of the original space usage!

Wikipedia notes that at cardinalities on the order of 10^9, the error is only about 2% while using 1.5 kB of memory:

The HyperLogLog algorithm is able to estimate cardinalities of > 10^9 with a typical error rate of 2%, using 1.5 kB of memory.
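
For reference, a minimal sketch of counting unique viewers per post with Redis's HyperLogLog commands via redis-py; the key name and IDs below are made up for illustration and are not from the Reddit post:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Record each viewer ID against the post's HyperLogLog key.
# PFADD only updates a probabilistic sketch, not the IDs themselves.
for user_id in ["u1", "u2", "u3", "u2"]:
    r.pfadd("views:post:12345", user_id)

# PFCOUNT returns an estimate of the number of distinct IDs seen,
# using roughly 12 KB per key regardless of cardinality.
print(r.pfcount("views:post:12345"))  # ~3
```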

The second is that writes tolerate a short delay (pageview counts aren't reflected in real time), so batching is used to reduce the load on the Cassandra cluster:

Writes to Cassandra are batched in 10-second groups per post in order to avoid overloading the cluster.
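
A rough sketch of the batching idea, not Reddit's actual code: buffer per-post counts in memory and flush them every 10 seconds; flush_to_cassandra below is a hypothetical stand-in for the real write path.

```python
import time
from collections import Counter

FLUSH_INTERVAL = 10  # seconds, matching the 10-second batches in the quote
_buffer = Counter()
_last_flush = time.monotonic()

def flush_to_cassandra(counts):
    """Hypothetical stand-in for the real batched Cassandra write."""
    for post_id, n in counts.items():
        print(f"post {post_id}: +{n} views")

def record_view(post_id):
    """Called for every pageview event; only touches memory until a flush."""
    global _last_flush
    _buffer[post_id] += 1
    if time.monotonic() - _last_flush >= FLUSH_INTERVAL:
        flush_to_cassandra(_buffer)
        _buffer.clear()
        _last_flush = time.monotonic()
```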

Note that Redis is used as the cache layer rather than the storage layer.

The main reason is probably that Redis positions itself as a data structure server rather than data structure storage (you can see this in its approach to durability), while Cassandra as a key-value store scales very easily but is slow to read. The two complement each other nicely.

Database Architectures That Scale Better Than Linearly

The title comes from Percona's "Better Than Linear Scaling".

The effect comes from adding machines with proper planning: the total cache grows, the hit rate rises, and overall efficiency improves (in other words, 1 + 1 > 2).
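
As a back-of-the-envelope illustration (the numbers here are made up, not from the Percona post): if adding nodes also grows the aggregate cache and lifts the hit rate, the backend work per request drops, so capacity grows faster than the node count.

```python
# Illustrative only: assume a cache hit costs 1 unit of work and a miss costs 20.
def cost_per_request(hit_rate):
    return hit_rate * 1 + (1 - hit_rate) * 20

# One node at an 80% hit rate vs. two nodes whose combined cache reaches 90%.
one_node = 1 / cost_per_request(0.80)
two_nodes = 2 / cost_per_request(0.90)

print(two_nodes / one_node)  # ~3.3x the capacity from 2x the nodes
```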

This doesn't only happen with databases; similar situations show up in other systems too. Seeing it again brought back memories XD

Amazon DynamoDB Accelerator (DAX)

A new architecture for DynamoDB that handles caching at the system level: "Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads".

DAX is compatible with the existing DynamoDB API:

DAX is a fully managed caching service that sits (logically) in front of your DynamoDB tables. It operates in write-through mode, and is API-compatible with DynamoDB.
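
Because DAX is API-compatible with DynamoDB, existing read code like the boto3 sketch below (the table and key names are made up) can stay unchanged; only the client is swapped for a DAX client pointed at the cluster endpoint.

```python
import boto3

# Plain DynamoDB client today; with DAX, this client object is replaced by a
# DAX client for the cluster endpoint, and the calls below stay identical.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Products")  # hypothetical table name

# A read like this would be served from the DAX cache on a hit.
response = table.get_item(Key={"ProductId": "12345"})
print(response.get("Item"))
```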

Because of the caching, it is an eventually-consistent architecture:

Responses are returned from the cache in microseconds, making DAX a great fit for eventually-consistent read-intensive workloads.

Clusters are built from r3-series instances and are limited to ten nodes (which raises a big question mark for me):

Each DAX cluster can contain 1 to 10 nodes; you can add nodes in order to increase overall read throughput. The cache size (also known as the working set) is based on the node size (dax.r3.large to dax.r3.8xlarge) that you choose when you create the cluster. Clusters run within a VPC, with nodes spread across Availability Zones.

I'm not quite sure what the advantage is (compared to running your own memcached or a similar cache layer); maybe it will click after a few days of thinking about it... :o

How eBay Uses MongoDB as a Cache Layer...

In "How eBay's Shopping Cart used compression techniques to solve network I/O bottlenecks", eBay describes how they solved a problem they ran into with MongoDB. What caught my eye, though, is how they use MongoDB rather than the problem being solved this time:

It’s easier to think of the MongoDB layer as a “cache” and the Oracle store as the persistent copy. If there’s a cache miss (that is, missing data in MongoDB), the services fall back to recover the data from Oracle and make further downstream calls to recompute the cart.

MongoDB is used as a cache layer, and on a cache miss they still fall back to the underlying Oracle store to fetch the data and recompute. It's a rather interesting approach...
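
The read path they describe is essentially a cache-aside lookup. A minimal sketch of the idea in Python; the collection name, load_cart_from_oracle, and recompute_cart are hypothetical placeholders, not eBay's code.

```python
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
carts = mongo.shopping.carts  # hypothetical database/collection names

def load_cart_from_oracle(user_id):
    """Hypothetical: fetch the persistent copy from Oracle."""
    raise NotImplementedError

def recompute_cart(raw_cart):
    """Hypothetical: downstream calls to rebuild the cart document."""
    raise NotImplementedError

def get_cart(user_id):
    # Try the MongoDB "cache" first.
    doc = carts.find_one({"_id": user_id})
    if doc is not None:
        return doc
    # Cache miss: fall back to Oracle, recompute, and repopulate MongoDB.
    cart = recompute_cart(load_cart_from_oracle(user_id))
    carts.replace_one({"_id": user_id}, cart, upsert=True)
    return cart
```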

I don't know why they didn't just use memcached. Did they want a cache layer with an HA story? Or do they need to query and operate on JSON documents directly?

Facebook Works with the Google Chrome and Firefox Teams to Reduce the Cost of Reloads

Facebook spent quite a bit of time tackling reloads: "This browser tweak saved 60% of requests to Facebook".

Facebook found that a large share of requests for static resources were answered with 304 (Not Modified):

In 2014 we found that 60% of requests for static resources resulted in a 304. Since content addressed URLs never change, this means there was an opportunity to optimize away 60% of static resource requests.

Google Chrome was noticeably higher than the other browsers:

After digging into why, they found that Chrome revalidates all resources on any page loaded from a POST request:

A piece of code in Chrome hinted at the answer to our question. This line of code listed a few reasons, including reload, for why Chrome might ask to revalidate resources on a page. For example, we found that Chrome would revalidate all resources on pages that were loaded from making a POST request.

After discussion, this behavior was deemed unnecessary and was removed, and the drop was substantial:

We worked with Chrome product managers and engineers and determined that this behavior was unique to Chrome and unnecessary. After fixing this, Chrome went from having 63% of its requests being conditional to 24% of them being conditional.

It was still noticeably higher than other browsers, though. Digging further, they found that when you navigate to the same URL (for example pressing Ctrl-L or Cmd-L and then Enter), Chrome treats it as a reload:

The fact that the percentage of conditional requests from Chrome was still higher than other browsers seemed to indicate that we still had some opportunity here. We started looking into reloads and discovered that Chrome was treating same URL navigations as reloads while other browsers weren't.

This fix didn't produce a big change in the metrics, though (testing in production XDDD):

Chrome fixed the same URL behavior, but we didn't see a huge metric change. We began to discuss changing the behavior of the reload button with the Chrome team.

They then changed the behavior of the reload button: resources with a long max-age are not revalidated, while those with a shorter max-age still are. It's something of a workaround:

There was some debate about what to do, and we proposed a compromise where resources with a long max-age would never get revalidated, but that for resources with a shorter max-age the old behavior would apply. The Chrome team thought about this and decided to apply the change for all cached resources, not just the long-lived ones.

Google also published a post about the new behavior: "Reload, reloaded: faster and leaner page reloads".

When Facebook approached the Firefox team, Firefox decided to define explicitly which resources don't need revalidation on reload, rather than taking a workaround like Chrome's:

Firefox chose to implement this directive in the form of a cache-control: immutable header.

The Firefox team also wrote "Using Immutable Caching To Speed Up The Web" explaining the new feature.
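
For content-addressed static assets (URLs that change whenever the content changes), the directive Firefox looks for can simply be appended to a long max-age. A minimal Flask sketch, assuming a hypothetical /static-ca/ route; this is not Facebook's actual setup:

```python
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/static-ca/<path:filename>")
def content_addressed_asset(filename):
    # The URL embeds a hash of the content, so it never changes in place:
    # safe to mark as immutable alongside a long max-age.
    resp = send_from_directory("static", filename)
    resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    return resp
```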

So there's one more thing to take into account when planning frontend/backend architecture...

GitHub Repositions Its Use of Redis...

GitHub Engineering explained why they changed how they use Redis: "Moving persistent data out of Redis".

At GitHub, Redis is used in two different ways. One they call transient Redis, used purely as a cache:

We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.

The other has persistence enabled and is called persistent Redis:

We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.

The post is about persistent Redis being replaced with storage in MySQL (InnoDB):

Recently we made the decision to disable persistence in Redis and stop using it as a source of truth for our data. The main motivations behind this choice were to:

  • Reduce the operational cost of our persistence infrastructure by removing some of its complexity.
  • Take advantage of our expertise operating MySQL.
  • Gain some extra performance, by eliminating the I/O latency during the process of writing big changes on the server state to disk.

For the majority of callsites, we replaced persistent Redis with GitHub::KV, a MySQL key/value store of our own built atop InnoDB, with features like key expiration. We were able to use GitHub::KV almost identically as we used Redis: from trending repositories and users for the explore page, to rate limiting to spammy user detection.
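
GitHub::KV itself is a Ruby library and its schema isn't shown in the post, but the idea of a MySQL/InnoDB key-value table with expiration can be sketched roughly like this (Python with PyMySQL; all names here are illustrative assumptions, not GitHub's implementation):

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app", password="secret", database="app")

SCHEMA = """
CREATE TABLE IF NOT EXISTS key_values (
    `key`        VARBINARY(255) NOT NULL PRIMARY KEY,
    `value`      BLOB NOT NULL,
    `expires_at` DATETIME NULL,
    KEY idx_expires_at (`expires_at`)
) ENGINE=InnoDB
"""

with conn.cursor() as cur:
    cur.execute(SCHEMA)
conn.commit()

def kv_set(key, value, ttl_seconds=None):
    """Upsert a value, optionally with a TTL enforced at read time."""
    with conn.cursor() as cur:
        cur.execute(
            "REPLACE INTO key_values (`key`, `value`, `expires_at`) "
            "VALUES (%s, %s, IF(%s IS NULL, NULL, NOW() + INTERVAL %s SECOND))",
            (key, value, ttl_seconds, ttl_seconds),
        )
    conn.commit()

def kv_get(key):
    """Return the value if present and not expired, else None."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT `value` FROM key_values "
            "WHERE `key` = %s AND (`expires_at` IS NULL OR `expires_at` > NOW())",
            (key,),
        )
        row = cur.fetchone()
    return row[0] if row else None
```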

The rest of the post covers the migration process in some detail (including rewriting certain features), but doesn't explain very clearly why they stopped using Redis.

Judging only from the three reasons given: perhaps the I/O cost of persistence was too high and there wasn't much performance left to squeeze out, whereas InnoDB has had enormous engineering effort poured into it, so using it directly solves the problem?

Still, the migration clearly took a lot of effort; some applications used Redis in ways that couldn't be moved onto InnoDB directly and had to be rewritten...

Hash Table Performance Improvements in Ruby 2.4

Ruby released 2.4.0 a few days ago (Ruby 2.4.0 Released), and one item was called out specifically: "Introduce hash table improvement (by Vladimir Makarov)".

The discussion thread is long and spanned a long time, but the direction is clearly about improving CPU cache efficiency:

Modern processors have several levels of cache. Usually, the CPU reads one or a few lines of the cache from memory (or another level of cache). So CPU is much faster at reading data stored close to each other. The current implementation of Ruby hash tables does not fit well to modern processor cache organization, which requires better data locality for faster program speed.

Along the way they even used Redmine as one of the benchmark targets... XD

CloudFront's Regional Edge Caches

Amazon CloudFront recently announced a two-tier architecture: "Announcing Regional Edge Caches for Amazon CloudFront".

In a typical CDN, an edge node that misses goes straight to the origin, which means a lot of traffic reaches the origin. A two-tier architecture inserts a layer in between to cut the traffic hitting the origin. This helps enormously with live-streaming traffic patterns: the volume is huge, so many edge servers are needed, and once there are many edge servers they put enormous pressure on the origin.

For live streaming, a 95% hit rate means 20 Gbps of edge traffic still produces 1 Gbps at the origin; with a two-tier setup this can usually be pushed to 98%+ (a CDN vendor with tuning can reach 99%+).
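
The arithmetic: origin traffic is edge traffic times the miss rate, so each extra point of hit rate matters a lot at the tail. A quick check with the numbers above:

```python
edge_gbps = 20

for hit_rate in (0.95, 0.98, 0.99):
    origin_gbps = edge_gbps * (1 - hit_rate)
    print(f"{hit_rate:.0%} hit rate -> {origin_gbps:.1f} Gbps at the origin")

# 95% -> 1.0 Gbps, 98% -> 0.4 Gbps, 99% -> 0.2 Gbps
```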

Akamai calls this technique Tiered Distribution, EdgeCast calls it SuperPoP, and now CloudFront offers it too, under the name Regional Edge Cache:

The nine new Regional Edge Cache locations are in Northern Virginia, Oregon, São Paulo, Frankfurt, Singapore, Seoul, Tokyo, Mumbai, and Sydney.

Edge locations go through these regional edge caches before reaching the origin:

These locations sit between your origin webserver and the 68 global edge locations that serve traffic directly to your viewers.

It's enabled by default, with no extra charge:

Regional Edge Caches are turned on by default for your CloudFront distributions; you do not need to make any changes to your distributions to take advantage of this feature. There are also no additional charges to use this feature.

This architecture probably isn't great for latency, though, and it's a bit odd that there's no way to turn it off...
