I'm compressing ~1.3 GB folders each filled with 1440 JSON files and find that there's a 15-fold difference between using the tar command on macOS or Raspbian 10 (Buster) and using Python's built-in tarfile library.
Seeing 1440 files, the intuition should be that it's one file per minute, i.e. a full day's worth of data...
The next day he found the cause: after installing GNU tar and adding the --sort='name' option, the compressed output came out about the same size as Python's tarfile:
Ok, I think I found the issue: BSD tar and GNU tar without any sort options put the files in the archive in an undefined order.
After installing GNU tar on my Mac with:
brew install gnu-tar
And then tarring the same folder, but with the --sort option:
My JSON files contain measurements from hundreds of sensors. Every minute I read out all sensors, but only a few of these sensors have a different value from minute to minute.
By sorting the files by name (which has the creation unixtime at the beginning of it), two subsequent files differ by only a few characters. Apparently this is very favourable for the compression efficiency.
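A minimal sketch of the same idea using Python's tarfile (the directory and file names below are made up, not from the original post): adding the per-minute JSON files in sorted name order keeps near-identical files adjacent in the archive, which is what lets the gzip step squeeze them down. With GNU tar the equivalent is roughly gtar --sort=name -czf out.tar.gz dir/.

import os
import tarfile

def make_sorted_archive(src_dir, out_path):
    # Adding members in sorted name order keeps similar files next to each
    # other, mirroring the effect of GNU tar's --sort=name.
    with tarfile.open(out_path, "w:gz") as tar:
        for name in sorted(os.listdir(src_dir)):
            tar.add(os.path.join(src_dir, name), arcname=name)

make_sorted_archive("sensor-logs/2021-03-01", "2021-03-01.tar.gz")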
Backblaze notes that figures for models with fewer than 250,000 drive days are for reference only, because there is too little data to support any statistical conclusion:
For drives which have less than 250,000 drive days, any conclusions about drive failure rates are not justified. There is not enough data over the year-long period to reach any conclusions. We present the models with less than 250,000 drive days for completeness only.
The answer: It was a group effort. To start, the older drives: 4TB, 6TB, 8TB, and 10TB drives as a group were significantly better in 2020, decreasing from a 1.35% AFR in 2019 to a 0.96% AFR in 2020. At the other end of the size spectrum, we added over 30,000 larger drives: 14TB, 16TB, and 18TB, which as a group recorded an AFR of 0.89% for 2020. Finally, the 12TB drives as a group had a 2020 AFR of 0.98%. In other words, whether a drive was old or new, or big or small, they performed well in our environment in 2020.
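For context, Backblaze's published definition annualizes failures over accumulated drive days, which is also why a model with too few drive days cannot support a meaningful AFR. A quick sketch of the arithmetic (the numbers below are made up for illustration):

# Backblaze's definition: AFR = drive_failures / (drive_days / 365) * 100
def afr(drive_failures, drive_days):
    return drive_failures / (drive_days / 365) * 100

# Illustrative numbers only: 120 failures over 4.5M drive days -> ~0.97% AFR
print(round(afr(120, 4_500_000), 2))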
Today we are announcing that beginning immediately, Backblaze B2 customers will be able to download data stored in B2 to Cloudflare for zero transfer fees.
Amazon S3 now provides increased performance to support up to 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance exponentially.
However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.
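Because the quoted limits are per prefix, a common way to scale beyond them is to spread keys across multiple prefixes. A rough sketch of such a key scheme (the shard count and helper below are hypothetical, not an AWS API):

import hashlib

# Hypothetical key scheme: prepend a hash-derived shard so objects spread
# across N prefixes, each of which gets its own 3,500 PUT/s and 5,500 GET/s.
def sharded_key(object_name, shards=16):
    shard = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % shards
    return "{:02x}/{}".format(shard, object_name)

print(sharded_key("sensor-logs/2021-03-01.tar.gz"))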
Stripe’s rate limiters are built on top of Redis, and until recently, they ran on a single very hot instance of Redis. The server had followers in place for failover, but at any given time, one node was handling every operation.
We eventually solved it by migrating to a 10-node Redis Cluster.
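Stripe's post describes several limiter types layered on Redis; as a generic illustration only (not Stripe's actual code), a fixed-window counter with the redis-py client looks roughly like this:

import time
import redis

r = redis.Redis()  # assumes a local Redis; Stripe ran this against a cluster

def allow_request(user_id, limit=100, window_s=60):
    # Fixed-window limiter: at most `limit` requests per `window_s` seconds.
    key = "ratelimit:{}:{}".format(user_id, int(time.time()) // window_s)
    count = r.incr(key)          # atomically bump this window's counter
    if count == 1:
        r.expire(key, window_s)  # first request in the window sets its TTL
    return count <= limit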
Consider the problem of trying to find a near-optimal version of a promotional message such as this one, which has 5 variable parts and 48 different combinations in total.
Based on the Amazon success rate and traffic size, the authors calculated it would take 66 days to conduct a 48 treatment randomized experiment. Often this isn’t practical.
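The paper's answer is to treat this as a bandit problem and adapt traffic allocation on the fly instead of running a fixed 48-arm experiment. As a loose illustration only (plain Thompson sampling over independent variants, whereas the paper itself models the 5 slots and their interactions):

import random

NUM_VARIANTS = 48  # 5 variable parts, 48 combinations in total

# Beta(1, 1) prior per variant, updated from observed conversions.
successes = [1] * NUM_VARIANTS
failures = [1] * NUM_VARIANTS

def choose_variant():
    # Thompson sampling: draw from each variant's posterior, show the best draw.
    draws = [random.betavariate(successes[i], failures[i]) for i in range(NUM_VARIANTS)]
    return max(range(NUM_VARIANTS), key=lambda i: draws[i])

def record_outcome(variant, converted):
    if converted:
        successes[variant] += 1
    else:
        failures[variant] += 1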
Which is what was mentioned at the beginning: how they lifted the conversion rate by 21% in a single week:
Aka, “How Amazon improved conversion by 21% in a single week!”
Yesterday we saw the hard-won wisdom on display in ‘seven myths’ recommending that experiments be kept simple and only test one thing at a time, otherwise interpreting the results can get really complicated.
Score = Lower bound of Wilson score confidence interval for a Bernoulli parameter
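Written out (reconstructed from the Ruby code below), with \(\hat{p}\) the observed fraction of positive ratings, \(n\) the total number of ratings, and \(z\) the normal quantile for the chosen confidence level, the score is:

\[
\text{score} = \frac{\hat{p} + \frac{z^2}{2n} - z\sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}
\]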
But the actual computation isn't that complicated; as the Ruby code below shows, it can be done almost entirely with built-in operations. The z in it is a constant in most cases.
require 'statistics2'

def ci_lower_bound(pos, n, confidence)
  if n == 0
    return 0
  end
  # z: normal quantile for the desired confidence level (1.96 for 95%)
  z = Statistics2.pnormaldist(1 - (1 - confidence) / 2)
  # phat: observed fraction of positive ratings
  phat = 1.0 * pos / n
  # lower bound of the Wilson score interval
  (phat + z*z/(2*n) - z * Math.sqrt((phat*(1-phat) + z*z/(4*n)) / n)) / (1 + z*z/n)
end
The z-score in this function never changes, so if you don't have a statistics package handy or if performance is an issue you can always hard-code a value here for z. (Use 1.96 for a confidence level of 0.95.)