An algorithm for upscaling anime to 4K in real time

Came across the 「Anime4K」 project:

Anime4K is a state-of-the-art*, open-source, high-quality real-time anime upscaling algorithm that can be implemented in any programming language.

State of the art* as of August 2019 in the real time anime upscaling category, the fastest at acheiving reasonable quality. We do not claim this is a superior quality general purpose SISR algorithm compared to machine learning approaches.

The numbers they provide show that 1080p -> 2160p (4K) takes only 3ms, well within the ~16.7ms frame budget at 60fps, and the quality looks pretty decent too.

One of the more interesting Q&A entries points out that 1080p -> 2160p is actually easier than 480p -> 720p: 1080p frames carry a lot of redundant information, so they are easier to process:

Why not do PSNR/SSIM on 480p->720p upscaling
Story Time

Comparing PSNR/SSIM on 480p->720p upscales does not prove and is not a good indicator of 1080p->2160p upscaling quality. (Eg. poor performance of waifu2x on 1080p anime) 480p anime images have a lot of high frequency information (lines might be thinner than 1 pixel), while 1080p anime images have a lot of redundant information. 1080p->2160p upscaling on anime is thus objectively easier than 480p->720p.

Analyzing famous developers' working hours from their Git history...

Saw an analysis of famous developers' working hours based on git log: 「At what time of day do famous programmers work?」.

From the people listed you can tell Chris Lattner (LLVM, Swift) is a night owl, while most of the others keep fairly normal hours... you can also see that some of them firmly refuse to touch any of this during their sleeping hours XD
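The idea is easy to reproduce yourself: dump commit timestamps with git log and bucket them by hour. A minimal sketch in Python (the repository path and author name below are just placeholders), counting commits per hour in the timezone recorded in each commit:

import subprocess
from collections import Counter

def commit_hours(repo_path, author):
    """Return a Counter of commits per hour of day for one author."""
    # --date=format:%H keeps only the hour, in the timezone recorded in each commit;
    # --pretty=%ad then prints just that hour, one line per commit.
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--author", author,
         "--date=format:%H", "--pretty=%ad"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return Counter(int(h) for h in out)

if __name__ == "__main__":
    hours = commit_hours("llvm-project", "Chris Lattner")  # hypothetical repo/author
    for hour in range(24):
        print(f"{hour:02d}:00  {hours.get(hour, 0)}")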

Facebook releases Hermes, a JS engine built for React Native

Facebook has released a JS engine optimized for React Native: 「Hermes: An open source JavaScript engine optimized for mobile apps, starting with React Native」.

The two more important parts mentioned are the lack of a JIT and the garbage collector strategy, both designed around the characteristics of mobile devices: avoiding the overhead a JIT brings, and keeping memory usage down.

The improvements in the official numbers also mostly fall into these two areas.

What they don't say is how much CPU usage goes up; it's only mentioned in passing:

Notably, our primary metrics are relatively insensitive to the engine’s CPU usage when executing JavaScript code.

That may be an acceptable trade-off for Facebook, but the rest of us have no numbers to go on... anyone planning to adopt it should weigh this risk themselves XD

Cloudflare launches time.cloudflare.com, but the latency isn't great...

Cloudflare now offers an NTP service at time.cloudflare.com: 「Introducing time.cloudflare.com」.

Officially they claim it is served from all of their datacenters (which should include the Taipei PoP):

Now, anyone can get time securely from all our datacenters in 180 cities around the world.

But testing from a HiNet connection shows the service being answered from a PoP in Japan (the route goes through Osaka) rather than Taiwan:

  2.|-- snuh-3302.hinet.net        0.0%    10    9.5  10.5   9.5  12.7   0.8
  3.|-- tpdt-3022.hinet.net        0.0%    10   11.1  10.8  10.1  11.8   0.3
  4.|-- r4103-s2.tp.hinet.net      0.0%    10   27.2  11.9   9.3  27.2   5.4
  5.|-- r4003-s2.tp.hinet.net      0.0%    10   11.3  10.7   9.5  11.6   0.3
  6.|-- xe-0-0-0-3-6.r02.osakjp02  0.0%    10   47.9  48.7  47.6  49.9   0.6
  7.|-- ae-2.a00.osakjp02.jp.bb.g  0.0%    10   44.9  47.3  42.8  66.4   6.9
  8.|-- ae-20.r03.osakjp02.jp.bb.  0.0%    10   43.7  43.2  42.5  44.7   0.3
  9.|-- 61.120.144.46              0.0%    10   69.6  52.4  48.0  69.6   6.8
 10.|-- 162.159.200.123            0.0%    10   48.3  48.8  47.9  49.3   0.0

From an APOL (cable TV ISP) connection, on the other hand, requests are served through the Taiwan PoP:

  3.|-- 10.251.11.6                0.0%    10   19.8  29.6  11.1  80.2  21.7
  4.|-- 10.251.231.5               0.0%    10   25.3  24.9  16.4  43.6   8.0
  5.|-- 10.251.231.1               0.0%    10    8.7   6.3   3.4   9.1   1.9
  6.|-- 10.251.230.34              0.0%    10    4.6   7.4   2.6  14.5   3.2
  7.|-- 10.251.230.29              0.0%    10    3.0   6.6   3.0  11.9   2.8
  8.|-- 202-178-245-162.cm.static  0.0%    10    4.8   7.3   4.8   9.9   1.4
  9.|-- 192.168.100.14             0.0%    10    7.3   8.0   5.4  11.1   2.0
 10.|-- 192.168.100.9              0.0%    10    8.2   8.0   4.9  11.1   1.7
 11.|-- 203-79-250-65.static.apol  0.0%    10    9.1   9.0   6.4  11.3   1.5
 12.|-- 211.76.96.92               0.0%    10    7.8   8.1   4.3  15.8   3.3
 13.|-- 39-222-163-203-static.tpi  0.0%    10    9.9   9.1   6.7  12.4   1.8
 14.|-- 162.159.200.1              0.0%    10    5.0   7.2   5.0   9.7   1.4

Looks like something isn't fully sorted out yet...
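For reference, querying the service directly over plain NTP is simple enough; a minimal SNTP (RFC 4330) request sketch using only Python's standard library (the NTS-secured mode described in the announcement needs a dedicated client):

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def sntp_time(server="time.cloudflare.com", timeout=2.0):
    """Send one SNTP request and return the server's transmit time as a Unix timestamp."""
    # 48-byte request: the first byte sets LI=0, VN=4, Mode=3 (client); the rest stays zero.
    packet = bytearray(48)
    packet[0] = (4 << 3) | 3
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # The transmit timestamp sits in bytes 40..47: 32-bit seconds + 32-bit fraction, big-endian.
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    print("server:", sntp_time())
    print("local :", time.time())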

AWS launches an even cheaper storage class: Glacier Deep Archive

The price of this new offering from AWS is even lower: 「New Amazon S3 Storage Class – Glacier Deep Archive」.

Before this, the cheapest S3 storage class in us-east-1 was Glacier, at USD$0.004/GB (i.e. $4/TB).

The newly launched Glacier Deep Archive storage class in the same region goes straight down to USD$0.00099/GB ($0.99/TB), roughly 1/4 of the price.

For retrieval, Glacier Deep Archive guarantees the first byte within 12 hours, and the minimum storage duration is 180 days:

Retrieval time within 12 hours

The pre-existing Glacier storage class lets you choose a retrieval pattern when requesting data (which affects time to first byte), and its minimum storage duration is 90 days:

Configurable retrieval times, from minutes to hours

Pricing for each of these metrics is determined by the speed at which data is requested based on three options. "Expedited" queries <250 MB are typically returned in 1-5 minutes. "Standard" queries are typically returned in 3-5 hours. "Bulk" queries are typically returned in 5-12 hours.

One more, cheaper option to pick from.
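Using it doesn't take anything special; a sketch assuming boto3, with made-up bucket and key names, that uploads an object straight into the Deep Archive storage class and later asks for it back:

import boto3

s3 = boto3.client("s3")

# Upload directly into Glacier Deep Archive; the object still appears in normal
# S3 listings, but reading it back requires a restore request first.
with open("2019-07.tar.zst", "rb") as body:
    s3.put_object(
        Bucket="my-archive-bucket",      # hypothetical bucket
        Key="backups/2019-07.tar.zst",   # hypothetical key
        Body=body,
        StorageClass="DEEP_ARCHIVE",
    )

# Later: ask S3 to restore a temporary copy (Standard tier, first byte within ~12 hours).
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2019-07.tar.zst",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)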

Taking fire at UI redesigns

Saw 「UI redesigns are mostly a waste of time」; the title already sums up the point the author wants to make... quoting the parts that were in bold:

Just stop redesigning the UI all the time.

People use applications because of their purpose, not because it is pleasing to the eye.

The only time a UI should be updated is if it impacts the ability of a user to actually use your application.

Please stop changing everything, stop making it more work for us to relearn the applications we like to use.

Does it really add up, or are you just giving designers busywork that doesn't amount to anything in the end?

Sometimes, if we are able to do it does not mean we should.

This reminds me of Gmail...

The state of 3rd party JS on the Internet

Saw this on Twitter:

It points to the analysis in 「patrickhulce/third-party-web」 (which the author built from HTTP Archive data), breaking down how much execution time each kind of 3rd party JS needs (ads, social elements, analytics tools, and so on), along with how many sites use each one.

Under the Social category I was surprised to see PIXNET show up, only slightly faster than Disqus; presumably it just hasn't been optimized.

Looking at everything together (total time spent), Google's various products sit right at the top, which makes sense given how widely every one of them is used.
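That overall ranking is essentially "average execution time x number of sites using it"; a toy sketch of the aggregation, with entity names and numbers made up rather than taken from the actual third-party-web dataset:

# Hypothetical per-entity stats: (entity, average execution time in ms, number of sites using it).
entities = [
    ("Google Analytics", 80, 1_200_000),
    ("Facebook", 150, 900_000),
    ("Disqus", 1_500, 30_000),
    ("PIXNET", 1_400, 8_000),
]

# Total impact = average cost per page * how many pages include it.
ranked = sorted(entities, key=lambda e: e[1] * e[2], reverse=True)

for name, avg_ms, usage in ranked:
    print(f"{name:20s} avg {avg_ms:>5,d} ms x {usage:>9,d} sites = {avg_ms * usage / 1000:>12,.0f} s total")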

The lifetime of LED bulbs

Saw a discussion of LED bulb lifetime in 「What Happened to the 100,000-Hour LED Bulbs?」. What interested me most is how the industry actually calculates an LED bulb's lifetime.

The article first covers the dimming and color shift that accumulate with use, then gets to the definition of "lifetime": the time it takes for more than half of the bulbs to fall below 70% of their original brightness:

Research says that most users won’t notice a gradual 30% drop in light levels; accordingly the industry has defined L70, the time at which the output has dropped to 70% of its initial level, as an endpoint for measuring LED bulb lifetime. Based on how it’s estimated, this measure is typically stated as B50-L70, the point at which 50% of an initial sample of bulbs will retain 70% of their rated output.
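To make the B50-L70 definition concrete, here is a toy simulation under an assumed exponential lumen-decay model with per-bulb variation (the model and the numbers are made up purely for illustration):

import math
import random

def b50_l70(n_bulbs=1000, mean_tau=60_000, spread=0.3, step=500):
    """Hours until 50% of a simulated batch drops below 70% of its initial output."""
    # Each bulb dims as output(t) = exp(-t / tau); tau varies from bulb to bulb.
    taus = [max(1.0, random.gauss(mean_tau, mean_tau * spread)) for _ in range(n_bulbs)]
    t = 0
    while True:
        t += step
        dimmed = sum(1 for tau in taus if math.exp(-t / tau) < 0.70)
        if dimmed >= n_bulbs / 2:
            return t

if __name__ == "__main__":
    print("estimated B50-L70:", b50_l70(), "hours")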

Another interesting data point: among the causes of LED bulb failure, power-supply-related problems account for the largest share (over half).

A fun bit of trivia...

AWS's recommendation-algorithm service: Amazon Personalize

AWS has packaged recommendation algorithms up as a service called Amazon Personalize: 「Amazon Personalize – Real-Time Personalization and Recommendation for Everyone」.

The algorithms behind it are hidden away; you only need to provide users' rating data, like the example from the article:

userId,movieId,rating,timestamp
1,2,3.5,1112486027
1,29,3.5,1112484676
1,32,3.5,1112484819
1,47,3.5,1112484727
1,50,3.5,1112484580

You can see that this user gave movieId 2, 29, 32, 47 and 50 a rating of 3.5 each, at different points in time.

Then, after a series of API calls (some parameters can be tuned, but it mostly amounts to having AWS do the training and stand up a real-time service), you can get recommendations for other items:

$ aws personalize-rec get-recommendations --campaign-arn $CAMPAIGN_ARN --user-id $USER_ID --query "itemList[*].itemId"
["1210", "260", "2571", "110", "296", "1193", ...]

And the Pricing page shows that both real-time data and batch data are supported:

DATA INGESTION
You are charged per GB of data uploaded to Amazon Personalize. This includes real-time data streamed to Amazon Personalize and batch data uploaded via Amazon S3.

This is a feature a lot of websites could really use...