Twitch Announces Its Exit from the Korean Market

Twitch has announced that it will exit the Korean market on 2024/02/27 (a Tuesday): "An Update on Twitch in Korea". I don't know how the date was chosen; maybe it's tied to some contract?

Twitch's announcements currently come with a Traditional Chinese version as well, so you can also read "Twitch 韓國現況更新" (the Traditional Chinese edition of the update).

I also looked around this morning and found that Hacker News has a discussion going: "An update on Twitch in Korea (twitch.tv)".

The official reason given is that the operation is losing money, and they could not find a way around it:

Ultimately, the cost to operate Twitch in Korea is prohibitively expensive and we have spent significant effort working to reduce these costs so that we could find a way for the Twitch business to remain in Korea.

The measures mentioned include a p2p model and capping source quality at 720p, but even so the network fees (presumably bandwidth costs) are more than ten times those in other regions:

First, we experimented with a peer-to-peer model for source quality. Then, we adjusted source quality to a maximum of 720p. While we have lowered costs from these efforts, our network fees in Korea are still 10 times more expensive than in most other countries. Twitch has been operating in Korea at a significant loss, and unfortunately there is no pathway forward for our business to run more sustainably in that country.

On the Cloudflare side, back in 2016 when the company was still called CloudFlare, they complained about this too: "CloudFlare 對 HiNet 成本的抱怨 (還有其他 ISP...)" (CloudFlare's complaint about HiNet costs, and other ISPs...).

Back then the writeup named HiNet and KT (Korea Telecom), with costs roughly 15 times those of Europe and North America:

Two Asian locations stand out as being especially expensive: Seoul and Taipei. In these markets, with powerful incumbents (Korea Telecom and HiNet), transit costs 15x as much as in Europe or North America, or 150 units.

For Korea in particular, government intervention made prices fall more slowly than anywhere else in the world, so over time the costs ended up much higher than in other regions:

South Korea is perhaps the only country in the world where bandwidth costs are going up. This may be driven by new regulations from the Ministry of Science, ICT and Future Planning, which mandate the commercial terms of domestic interconnection, based on predetermined “Tiers” of participating networks. This is contrary to the model in most parts of the world, where networks self-regulate, and often peer without settlement. The government even prescribes the rate at which prices should decrease per year (-7.5%), which is significantly slower than the annual drop in unit bandwidth costs elsewhere in the world. We are only able to peer 2% of our traffic in South Korea.

That said, I'm not sure about the current situation; the CloudFlare of 2016 and the Cloudflare of 2023 are seven years apart...

Amazon Q (Guessing Where the Name Comes From...)

AWS has launched Amazon Q, currently in preview: "Amazon Q brings generative AI-powered assistance to IT pros and developers (preview)".

The product itself is mainly an LLM application, nothing particularly special by today's standards; the fun part is people on Hacker News guessing where the Q comes from: "Amazon Q (Preview) (amazon.com)".

I saw guesses including Q from Star Trek, Q-learning, and Q from James Bond.

In the comment at id=38448900, someone mentioned that the Q stands for "question":

from the NYTimes article: The name Q is a play on the word “question,” given the chatbot’s conversational nature, Mr. Selipsky said. It is also a play on the character Q in the James Bond novels, who makes stealthy, helpful tools, and on a powerful “Star Trek” figure, he added.

A quick search suggests it is the article "Amazon Introduces Q, an A.I. Chatbot for Companies". You may not be able to read the full text because of the paywall, but there is a copy on archive.today: "Amazon Introduces Q, an A.I. Chatbot for Companies".

Anyway, they brought up everything that could plausibly be related...

The Many Variations of Song and Album Names

Saw "Horrible edge cases to consider when dealing with music (2022) (dustri.org)" on Hacker News, about all kinds of strange cases in the music industry. The original 2022 article is here: "Horrible edge cases to consider when dealing with music"; a large part of it is about problems with names...

This reminded me of a certain tweet (embedded in the original post).

Basically you will run into all kinds of escaping and UTF-8 handling. The article also mentions extremely long names, the nemesis of VARCHAR(255)... I'm not sure whether, back when company K was licensing a large catalog, they ever ran into this 1999 album (what follows is the name of a single album):

When the Pawn Hits the Conflicts He Thinks Like a King What He Knows Throws the Blows When He Goes to the Fight and He'll Win the Whole Thing 'Fore He Enters the Ring There's No Body to Batter When Your Mind Is Your Might So When You Go Solo, You Hold Your Own Hand and Remember That Depth Is the Greatest of Heights and If You Know Where You Stand, Then You Know Where to Land and If You Fall It Won't Matter, Cuz You'll Know That You're Right.
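
Just to make the VARCHAR(255) point concrete, here is a minimal sketch that measures the title quoted above against that limit (plain Python, nothing else assumed):

album_title = (
    "When the Pawn Hits the Conflicts He Thinks Like a King What He Knows "
    "Throws the Blows When He Goes to the Fight and He'll Win the Whole Thing "
    "'Fore He Enters the Ring There's No Body to Batter When Your Mind Is Your "
    "Might So When You Go Solo, You Hold Your Own Hand and Remember That Depth "
    "Is the Greatest of Heights and If You Know Where You Stand, Then You Know "
    "Where to Land and If You Fall It Won't Matter, Cuz You'll Know That You're Right."
)

# The title is well over the 255-character limit, so a VARCHAR(255) column
# would either truncate it or reject the insert, depending on the SQL mode.
print(len(album_title))
print(len(album_title) <= 255)  # False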

The other scenarios mentioned in the article are mostly things you eventually get used to, such as a single track that runs seven hours... On the server side you can still muddle through, but it's rough on mobile platforms: load the whole track into memory at once and it can blow up.

Seven hours at 320kbps is roughly 1GB; if you don't watch out for this and handle DRM naively, you can instantly burn through 2GB of RAM, and at CD quality the data volume is even more dramatic.
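
A quick back-of-the-envelope check of those numbers (just arithmetic, nothing implementation-specific):

# Rough size of a 7-hour track at 320 kbps and at CD quality (16-bit / 44.1 kHz / stereo).
seconds = 7 * 60 * 60                 # 25,200 seconds

mp3_bytes = 320_000 / 8 * seconds     # ~1.0e9 bytes, about 1 GB
cd_bytes = 44_100 * 2 * 2 * seconds   # ~4.4e9 bytes, about 4.4 GB

print(f"320kbps for 7 hours: {mp3_bytes / 1e9:.2f} GB")
print(f"CD quality for 7 hours: {cd_bytes / 1e9:.2f} GB")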

And you'll find that if you design the system treating any data from music creators as untrusted input (except that you can't reject it), things usually don't go too badly XD

Filter Rules for Blocking YouTube Shorts

I don't watch short-form videos at all, since they carry no real knowledge (there are no references to verify correctness against)... but I do browse regular videos on YouTube, so this need came up.

YouTube Shorts show up in a few places (for reference, I'm using the English interface):

  • On the left side of the homepage, there's a Shorts link that takes you into Shorts.
  • The recommendations on the homepage also include a Shorts section.

These two cases are handled by these two rules:

www.youtube.com##a[title="Shorts"]
www.youtube.com##ytd-rich-section-renderer

Note that the second rule blocks not only Shorts but also various YouTube promotions (movies and the like). Those are something I want blocked too, so I simply target the ytd-rich-section-renderer element directly.

Next are the Shorts shelves interleaved throughout pages such as the homepage, the subscriptions page, and search result pages; for these you have to find the corresponding elements to block:

www.youtube.com##ytd-reel-shelf-renderer
www.youtube.com##ytd-video-renderer:has(a[href*="/shorts/"])

Another thing that has nothing to do with Shorts but still hurts focus: YouTube's search results inject a pile of recommendations that pollute the results, such as "People also watched", "For you", "Previously watched", and "From related searches". These can be blocked as well:

www.youtube.com##:matches-path(/results) ytd-shelf-renderer[thumbnail-style]

That's roughly what I'm using at the moment...
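
For convenience, here is the same set collected into one block, ready to paste into a blocker's custom filter list. It assumes a blocker that supports Adblock-style cosmetic filters with :has() and :matches-path(), such as uBlock Origin; the lines starting with ! are just comments:

! Shorts entry points on the homepage
www.youtube.com##a[title="Shorts"]
www.youtube.com##ytd-rich-section-renderer
! Shorts shelves mixed into the homepage, subscriptions and search results
www.youtube.com##ytd-reel-shelf-renderer
www.youtube.com##ytd-video-renderer:has(a[href*="/shorts/"])
! Recommendation shelves injected into search results
www.youtube.com##:matches-path(/results) ytd-shelf-renderer[thumbnail-style]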

Upgrading a Sennheiser HD 555 into an HD 595

Saw this on Hacker News: how to turn a Sennheiser HD 555 into an HD 595 with nothing but a screwdriver: "sennheiser hd 555 to hd 595 mod".

This page will show you how to turn a $199.95 (Canadian – Suggested Retail) pair of Sennheiser HD 555 headphones into a pair of Sennheiser HD 595‘s that cost $349.95. And all you need is a screwdriver.

The only difference between the two is an extra piece of foam in the HD 555; just take it out:

Aside from the aesthetic differences, the only physical difference was an additional piece of foam inside the cheaper HD555 headphones, blocking about 50% of the outside-facing vents. Since both the HD 555 and HD 595 are designed to be “open” headphones, reducing the vent with this foam would alter the frequency response slightly. So to save yourself $150, open your HD 555’s up and remove the foam. Done.

The author says the difference is noticeable:

Yes. The actual sound difference is very slight, but it is noticeable.

In the Hacker News discussion, "Sennheiser HD 555 to HD 595 Mod (mikebeauchamp.com)", people speculate about the reason: it could be artificial product-line segmentation, or lower-binned units being repurposed, but it does look like the two units themselves are identical...

Both are old models at this point, though, and it looks like neither is in production anymore.

Half of Vinyl Buyers in the US Don't Own a Record Player

Saw a Hacker News discussion about how many people in the US buy vinyl without owning a record player: "Half of vinyl buyers in the U.S. don't have a record player: study (consequence.net)"; the original article is "Half of Vinyl Buyers in the US Don't Have a Record Player, New Study Shows".

Actually, plenty of people these days don't buy vinyl to listen to it; there are lots of other uses, like using a record as a big autograph board XD

I always tell friends that buying vinyl is buying a small poster, and one that's easy to preserve...

The US FTC Proposes Banning Unsubscribe Dark Patterns

Back in 2021 I wrote "最近很熱鬧的 New York Times 退訂截圖" (about the then-viral New York Times unsubscribe screenshots), covering the New York Times' dark pattern around cancellation, an approach later adopted by many newspapers' online services (such as the WSJ).

Later, California passed a law blocking this kind of dark pattern, which led to Reddit threads like this one teaching people to simply change their billing address to California so they can cancel online: "WSJ Subscription policy makes it easy to subscribe (online), but hard to unsubscribe (via phone).".

Now it looks like the FTC intends to push this at the national level, and not just for online services: gyms and cable TV would also have to offer symmetric ways to subscribe and to cancel: "The FTC wants to ban those tough-to-cancel gym and cable subscriptions".

I'll keep following the progress and see when it passes...

KataGo 1.12.0 and the UEC Cup Model: b18c384nbt-uec.bin.gz

Just saw that KataGo released 1.12.0, along with the model used at the UEC tournament in November 2022: "New Neural Net Architecture!".

The notable part of 1.12.0 is the new neural network architecture:

This version of KataGo adds support for a new and improved neural net architecture!

This new architecture, together with other improvements, speeds up training:

The new neural nets use a new nested residual bottleneck structure, along with other major improvements in training. They train faster than KataGo's old nets and learn more effectively.

The other news is that they released the model used at the UEC tournament. What's special is that it uses b18c384 (in KataGo's naming, roughly 18 blocks with 384 channels), while KataGo Distributed Training currently runs mainly b40c256 and b60c320, so it looks like a one-off net trained specifically for the tournament.

According to them, this b18c384 version is about as strong as the current b60c320 nets on the training site, but runs considerably faster than b60c320, and on some machines is even about as fast as b40c256:

Attached to this release is a one-off net b18c384nbt-uec.bin.gz that was trained for a tournament in 2022, which should be of similar strength to the 60-block nets on http://katagotraining.org/, but on many machines will run much faster, on some machines between 40-block and 60-block speed, but on some machines even as fast as or faster than 40-block.

Another big change is that the training code has switched from TensorFlow to PyTorch:

The training code has been all rewritten to use pytorch instead of tensorflow.

The release notes don't give a reason, which makes one rather curious...

Riffusion: Generating Music Directly from a Prompt

The wildly popular Stable Diffusion takes a piece of text (a prompt) and produces an image; Riffusion takes a piece of text and produces music.

Prompt-to-music itself is within the expected range (i.e. it was bound to show up sooner or later), but the project page explains that Riffusion is built on Stable Diffusion, and that it uses Stable Diffusion to generate spectrograms:

Well, we fine-tuned the model to generate images of spectrograms, like this:

In other words, images like the example spectrogram shown on their page.

The Hacker News discussion is worth a read, and the authors took part in some of it: "Riffusion – Stable Diffusion fine-tuned to generate music (riffusion.com)".

Someone there pointed out that this approach is beyond what you'd expect to work, because if the output image is off by just a few pixels you get a very different sound:

This really is unreasonably effective. Spectrograms are a lot less forgiving of minor errors than a painting. Move a brush stroke up or down a few pixels, you probably won't notice. Move a spectral element up or down a bit and you have a completely different sound. I don't understand how this can possibly be precise enough to generate anything close to a cohesive output.

Absolutely blows my mind.

One of the authors replied that they, too, were surprised to discover it was feasible only after actually trying it:

Author here: We were blown away too. This project started with a question in our minds about whether it was even possible for the stable diffusion model architecture to output something with the level of fidelity needed for the resulting audio to sound reasonable.

Having actually listened to the generated music, it really is passable music... nobody expected you could pull it off this way, and the upvote count on Hacker News went through the roof XD
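
If you're curious about the spectrogram-to-audio half of the trick, here is a minimal sketch of that step using librosa's Griffin-Lim phase reconstruction. This is not Riffusion's actual pipeline; the image-to-magnitude mapping below is an assumption purely for illustration.

import numpy as np
from PIL import Image
import librosa
import soundfile as sf

# Load a spectrogram image as grayscale: rows = frequency bins, columns = time frames.
img = np.asarray(Image.open("spectrogram.png").convert("L"), dtype=np.float32)

# Map 0..255 pixel values back to magnitudes. The exact scaling Riffusion uses is not
# reproduced here; this dB-style mapping (roughly -80 dB .. 0 dB) is only a placeholder.
mag = np.power(10.0, (img / 255.0) * 4.0 - 4.0)
mag = np.flipud(mag)  # assume low frequencies sit at the bottom of the image

# Griffin-Lim iteratively reconstructs phase from a magnitude-only spectrogram, which is
# why small pixel-level errors translate directly into audible differences.
audio = librosa.griffinlim(mag, n_iter=32)

sf.write("out.wav", audio, 22050)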

A Black-and-White Photo Colorization Service You Can Tweak Yourself

Saw the Palette service via Hacker News Daily; the author mentioned on Hacker News that you can supply sentences to adjust the colors: "Show HN: I made a new AI colorizer (palette.fm)".

Hi HN, I’m Emil, the maker behind Palette. I’ve been tinkering with AI and colorization for about five years. This is my latest colorization model. It’s a text-based AI colorizer, so you can edit the colorizations with natural language. To make it easier to use, I also automatically create captions and generate filters.

The author has posted some of his work on Reddit; see https://www.reddit.com/user/emilwallner/?sort=top. It looks like he's been at this for a while...