Remastering Star Trek: DS9 in Full HD with Neural Networks

I came across "Remastering Star Trek: Deep Space Nine With Machine Learning", which uses neural network algorithms to upscale Star Trek: DS9 from its original 480p (SD) to 1080p (Full HD), and the results look pretty good...

It's a nice surprise to see someone playing with Star Trek material... According to the author, one reason DS9 never got a Full HD release is, ironically, that it was already "digitized". Masters on analog film can be rescanned at a higher resolution to produce a high-definition version, but the DS9 masters are apparently already digital video, so a Full HD version can't be obtained by rescanning:

While you can rescan analog film at a higher resolution, video is digital and can't be rescanned. This makes it much costlier to remaster this TV show, which is one of the reasons why it hasn't happened.

Existing upscaling techniques are still mostly geared toward still images, so the author expected trouble with motion, but the results exceeded expectations, as the video clips in the post show.

It looks like future remasters could use this approach for a first pass and then have humans go in and touch things up?
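The article itself used an off-the-shelf commercial upscaler, so purely as a rough illustration of the general idea (single-image super-resolution applied frame by frame), a minimal Python sketch with OpenCV's dnn_superres module might look like this; the model file and video filenames are placeholders:

# Minimal sketch: per-frame super-resolution with OpenCV's dnn_superres module.
# Requires opencv-contrib-python and a pretrained FSRCNN model file (placeholder name);
# this only illustrates the idea, it is not the tool used in the article.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("FSRCNN_x3.pb")   # hypothetical pretrained 3x super-resolution model
sr.setModel("fsrcnn", 3)

cap = cv2.VideoCapture("ds9_480p.mp4")   # hypothetical input file
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    up = sr.upsample(frame)              # NN-based upscale of a single frame
    if out is None:
        h, w = up.shape[:2]
        out = cv2.VideoWriter("ds9_upscaled.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"),
                              cap.get(cv2.CAP_PROP_FPS), (w, h))
    out.write(up)
cap.release()
out.release()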

LinkedIn Uses Machine Learning to Suggest Candidates to Employers

I saw "Learning Hiring Preferences: The AI Behind LinkedIn Jobs" a while back; LinkedIn uses machine learning to suggest likely candidates to employers.

According to the official write-up, the improvement here is adjusting recommendations based on employer behavior. When an employer shows interest in someone, LinkedIn adjusts the algorithm toward the traits that employer is interested in:

Based on how you interact with candidates, our algorithm learns your preferences and delivers increasingly relevant candidates across the Jobs product. If you’re consistently interested in candidates who are, say, accountants with leadership skills, or project managers who are adept at social media, we’ll send you more of those. And this all happens online in real time so that your feedback is taken instantly into account.
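LinkedIn doesn't spell out the model, but as a toy illustration of "learning preferences online from employer feedback", scikit-learn's partial_fit interface captures the flavor; the features and labels below are entirely made up:

# Toy illustration of online preference learning (not LinkedIn's actual model).
# Candidate features and the feedback label here are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")    # online logistic regression

# One interaction: a candidate feature vector plus whether the employer showed interest.
candidate = np.array([[1.0, 0.0, 3.5]])   # e.g. [is_accountant, is_pm, leadership_score]
interested = np.array([1])

# partial_fit updates the model immediately, so the next ranking already reflects the feedback.
model.partial_fit(candidate, interested, classes=[0, 1])
scores = model.predict_proba(candidate)[:, 1]   # re-rank candidates by predicted interest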

Measured by simulation, it's roughly a 20% improvement:

This new algorithm, which is used throughout the Jobs platform, performs nearly 20% better than the previous version in generating recommendations when we simulate our members' past hiring activity.

On a social network this kind of algorithm is essentially an echo chamber / filter bubble; on LinkedIn, I wonder whether this behavior could run into discrimination issues...

AWS's Recommendation-Algorithm Service: Amazon Personalize

AWS has packaged recommendation algorithms into a service for sale, called Amazon Personalize: "Amazon Personalize – Real-Time Personalization and Recommendation for Everyone".

The algorithms behind it are hidden away; you just feed it users' rating data, like the example in the article:

userId,movieId,rating,timestamp
1,2,3.5,1112486027
1,29,3.5,1112484676
1,32,3.5,1112484819
1,47,3.5,1112484727
1,50,3.5,1112484580

You can see this user gave a 3.5 rating to movieId 2, 29, 32, 47, and 50 at different points in time.

Then, after a series of API calls (some parameters are tunable, but essentially you ask AWS to do the training and stand up a real-time service), you can see which other items it recommends:

$ aws personalize-rec get-recommendations --campaign-arn $CAMPAIGN_ARN --user-id $USER_ID --query "itemList[*].itemId"
["1210", "260", "2571", "110", "296", "1193", ...]

And from the Pricing page you can see that both real-time data and batch data are supported:

DATA INGESTION
You are charged per GB of data uploaded to Amazon Personalize. This includes real-time data streamed to Amazon Personalize and batch data uploaded via Amazon S3.

This is actually something a lot of sites really need...

AWS's New Amazon Elastic Inference: A GPU-for-Rent Offering

AWS has launched Amazon Elastic Inference, which lets you attach a chosen amount of GPU power to an EC2 instance: "Amazon Elastic Inference – GPU-Powered Deep Learning Inference Acceleration".

At first glance I thought, didn't this already exist?... After some searching, it looks like graphics workloads and machine learning workloads have been split onto different hardware offerings?

Which is why, a while back, AWS announced that Amazon EC2 Elastic GPUs was being renamed to Amazon Elastic Graphics: "Amazon EC2 Elastic GPUs is now Amazon Elastic Graphics".

The old Amazon EC2 Elastic GPUs (now Amazon Elastic Graphics) is presumably aimed at graphics applications, while the new Amazon Elastic Inference is aimed at machine learning.
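As a rough sketch of how the new service is used, the accelerator is attached when launching the instance; the AMI ID and subnet below are placeholders:

# Sketch: launching an EC2 instance with an Elastic Inference accelerator attached.
# The AMI ID and subnet below are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # e.g. a Deep Learning AMI
    InstanceType="c5.xlarge",                 # CPU instance for the main workload
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],   # GPU-powered inference add-on
)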

EC2 Launches Machine-Learning-Assisted Auto Scaling...

AWS has launched a feature on EC2 that uses machine learning to help drive auto scaling: "New – Predictive Scaling for EC2, Powered by Machine Learning".

Give it at least one day of data (it then re-analyzes once a day), and it forecasts usage behavior for the next 48 hours:

The model needs at least one day’s of historical data to start making predictions; it is re-evaluated every 24 hours to create a forecast for the next 48 hours.

So the idea is to learn the pattern and have capacity spun up ahead of time, waiting for the load...

It's about improving service stability through forecasting... If things are already running fine (i.e. scaling the instance count off resource-based metrics works well for you), you may not need to consider this.
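At launch this was configured through an AWS Auto Scaling scaling plan; a rough boto3 sketch (group name, tags, and target values are placeholders, not taken from the announcement) might look like this:

# Sketch: enabling predictive scaling for an Auto Scaling group via a scaling plan.
# Names, tags, and targets below are placeholders.
import boto3

plans = boto3.client("autoscaling-plans")
plans.create_scaling_plan(
    ScalingPlanName="web-tier-plan",
    ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["web"]}]},
    ScalingInstructions=[{
        "ServiceNamespace": "autoscaling",
        "ResourceId": "autoScalingGroup/web-asg",
        "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
        "MinCapacity": 2,
        "MaxCapacity": 20,
        # reactive part: keep average CPU around 50%
        "TargetTrackingConfigurations": [{
            "PredefinedScalingMetricSpecification": {
                "PredefinedScalingMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        }],
        # predictive part: forecast the load metric and scale ahead of it
        "PredefinedLoadMetricSpecification": {
            "PredefinedLoadMetricType": "ASGTotalCPUUtilization"},
        "PredictiveScalingMode": "ForecastAndScale",
    }],
)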

Among the currently supported regions, Tokyo isn't on the list, but the other commonly used regions are:

Predictive scaling is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions.

Software License Distribution on GitHub

Although GitHub provides an API for querying license information, its accuracy isn't great (modify the license text even slightly and GitHub can no longer detect it correctly), so someone decided to run a separate analysis using machine learning: "Detecting licenses in code with Go and ML". Naturally, this only covers the public repositories.

The biggest chunk is the MIT License, followed by Apache-2.0 (setting aside the question-mark group), and then the various versions of the GPL family. No big surprises there...

Turning Images Directly Into HTML With Neural Networks

I came across the "emilwallner/Screenshot-to-code-in-Keras" project on GitHub, which turns images directly into HTML. The accompanying write-up is "Turning Design Mockups Into Code With Deep Learning".

It's a bit like "exporting Sketch designs to iOS/Android code" and "a trained NN (neural network) system that turns images directly into code" (the latter happens to be mentioned in the write-up as well).
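As a very reduced sketch of the general architecture (a CNN encodes the screenshot, an LSTM runs over the markup generated so far, and a decoder predicts the next token), something like the following Keras model captures the shape of it; the layer sizes, sequence length, and vocabulary here are made up and are not the project's actual model:

# Reduced sketch of the screenshot-to-markup idea: image encoder + token encoder
# merged into a decoder that predicts the next markup token. Sizes are arbitrary.
from tensorflow.keras import layers, models

vocab_size, max_len = 20, 48

image_in = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(16, 3, strides=2, activation="relu")(image_in)
x = layers.Conv2D(32, 3, strides=2, activation="relu")(x)
x = layers.Flatten()(x)
image_vec = layers.Dense(128, activation="relu")(x)
image_seq = layers.RepeatVector(max_len)(image_vec)    # repeat image features per time step

tokens_in = layers.Input(shape=(max_len,))
t = layers.Embedding(vocab_size, 50)(tokens_in)
t = layers.LSTM(128, return_sequences=True)(t)

merged = layers.concatenate([image_seq, t])
d = layers.LSTM(256)(merged)
next_token = layers.Dense(vocab_size, activation="softmax")(d)

model = models.Model([image_in, tokens_in], next_token)
model.compile(loss="categorical_crossentropy", optimizer="adam")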

More and more it feels like neural networks are gradually taking over human work...

AWS Offers a Deep Learning AMI for Windows

Some Windows-only things can now be spun up and run directly: "Announcing New AWS Deep Learning AMI for Microsoft Windows".

Currently Windows Server 2012 R2 and 2016 are supported:

Amazon Web Services now offers an AWS Deep Learning AMI for Microsoft Windows Server 2012 R2 and 2016.

And the drivers plus the commonly used bits are all bundled in:

The AMIs also include popular deep learning frameworks such as Apache MXNet, Caffe and Tensorflow, as well as packages that enable easy integration with AWS, including launch configuration tools and many popular AWS libraries and tools. The AMIs come prepackaged with Nvidia CUDA 9, cuDNN 7, and Nvidia 385.54 drivers, and contain the Anaconda platform (supports Python versions 2.7 and 3.5).
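As a rough sketch of picking up and launching the AMI with boto3; the name filter and instance type below are guesses, not taken from the announcement:

# Sketch: looking up the Windows Deep Learning AMI by name and launching it.
# The name pattern and instance type are assumptions.
import boto3

ec2 = boto3.client("ec2")
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["Deep Learning AMI*Windows*"]}],   # hypothetical pattern
)["Images"]
latest = max(images, key=lambda i: i["CreationDate"])

ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="p3.2xlarge",   # a GPU instance type, as an example
    MinCount=1,
    MaxCount=1,
)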

Machine Learning and Problems for the Sex Industry

Bruce Schneier brought up several recent, related issues concerning the privacy problems that arise when machine learning is used in the sex industry: "Technology to Out Sex Workers".

The first is PornHub using machine learning to identify performers along with various "other information"; the report cited is TechCrunch's "PornHub uses computer vision to ID actors, acts in its videos":

PornHub is using machine learning algorithms to identify actors in different videos, so as to better index them.

The computer vision system can identify specific actors in scenes and even identifies various positions and… attributes.

The second is the problem of stage names being linked to real identities:

People are worried that it can really identify them, by linking their stage names to their real names.

Finally, he notes that Facebook already has the ability to do this, and that it has already happened:

Facebook somehow managed to link a sex worker's clients under her fake name to her real profile.

Her sex-work identity is not on the social network at all; for it, she uses a different email address, a different phone number, and a different name. Yet earlier this year, looking at Facebook’s “People You May Know” recommendations, Leila (a name I’m using in place of either of the names she uses) was shocked to see some of her regular sex-work clients.

This issue is somewhat similar to mass surveillance...

The Official StarCraft II AI Workshop

Blizzard has announced that it will hold a StarCraft II AI Workshop in early November: "Announcing the StarCraft II AI Workshop".

On November 3 and 4, Blizzard and DeepMind will co-host the StarCraft II AI Workshop at the Hilton Anaheim hotel, next to the Anaheim Convention Center.

Blizzard (including the DeepMind team) will also be on hand to discuss SC2LE (StarCraft II Learning Environment) and SC2API (StarCraft II API):

Engineers and researchers from Blizzard and DeepMind will also be on-hand to meet with attendees and answers questions about the SC2LE and SC2API.

The timing overlaps with BlizzCon 2017 (it looks like it hits the last two days), and the badges aren't interchangeable:

While this event takes place during BlizzCon 2017, it is considered a separate event and is not part of the official BlizzCon program – therefore BlizzCon badges will not grant access to the AI workshop. However, we will be providing a limited pool of shareable BlizzCon badges that attendees of the AI workshop can use to check out BlizzCon and catch the StarCraft II Global Finals for inspiration on how to build superior AIs!

There should be quite a bit of news coming out of this... Has the DeepMind team's progress reached the point where it can compete with top players?