Building a Complete Computer Inside Minecraft

Saw on Lobsters Daily that some absolute wizard implemented logic circuits in Minecraft and built a complete computer out of them (the CPU part should be Turing-complete):

PCWorld has coverage: 「This 8-bit processor built in Minecraft can run its own games」.

Here are some screenshots of the descriptions from the video:

There's even a branch predictor:

The memory is also huge by Minecraft standards, at 256 bytes:

There's a cache layer as well; this part shows the data cache:

And this is the instruction cache:

Since it's already quite powerful, it can run many classic games, like Tetris:

Snake:

Breakout:

Connect Four:

BOOX Mira Series E-paper Monitors

I noticed the 「BOOX Mira Series」 mainly because the boss at my current company is one of the inventors of electronic ink (and a cofounder of E Ink), so e-ink related news tends to catch my eye...

The smaller BOOX Mira is a 13.3" 4:3 screen, but the resolution already goes up to 2200x1650 (207 ppi), with Mini HDMI or Type C input, selling for US$799; from the pictures it looks like a portable unit.

The larger BOOX Mira Pro is a 25.3" 16:9 screen (3200x1800, 145 ppi), taking HDMI/Mini HDMI/DP or Type C input, priced at US$1799; from the pictures it's meant to sit on a desk.
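The ppi figures check out against the resolution and the diagonal; a quick sanity check (just a sketch, using the sizes and resolutions quoted above):

import math

def ppi(width_px, height_px, diagonal_in):
    # pixels along the diagonal divided by the diagonal length in inches
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2200, 1650, 13.3)))  # BOOX Mira: 207
print(round(ppi(3200, 1800, 25.3)))  # BOOX Mira Pro: 145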

Because of how e-paper works, the refresh rate isn't anything like a traditional monitor, so you have to think about the use cases...

AWS Brings the New Nitro Architecture Back to Support the Old Xen-based Instance Types

Saw this on Twitter via Jeff Barr, about AWS deciding to have the new Nitro architecture support the old Xen-based instance types:

The original post is 「Xen-on-Nitro: AWS Nitro for Legacy Instances」. It puts a rather flattering spin on the whole thing, but it does mention a few very practical problems. The first is that there are still a lot of users (1.2 million) on Xen-based instances:

Today, we still have over 1.2 Million unique customers using Xen-based instances.

But these machines are getting harder and harder to maintain: on one hand Nitro saves AWS a lot of software maintenance work, and on the other hand almost no new users pick these old instance types, so sourcing the old hardware also becomes a problem.

However, the underlying hardware is old and it’s getting increasingly difficult to maintain support for these older hypervisor systems.

So the EC2 team implemented a Xen-compatible layer on top of Nitro, and starting in 2022 everything can run on Nitro, which should cut the EC2 team's maintenance cost substantially:

All of these innovations enable us to continue to offer many of our older instance types well past the lifetime of the original hardware. Starting in 2022, customers launching M1, M2, M3, C1, C3, R3, I2 and T1 instances will land on Nitro supported instances hardware and existing running instances will also be migrated.

The technical debt can't just disappear, so this is how they bring the maintenance cost down XD

AWS Is Launching Graviton3-based Instances

AWS is about to launch Graviton3-based instances, currently in preview: 「Join the Preview – Amazon EC2 C7g Instances Powered by New AWS Graviton3 Processors」.

They claim up to 25% better performance than the previous-generation Graviton2, plus improvements in floating point and cryptographic workloads (those gains presumably come with help from new instructions):

In comparison to the Graviton2, the Graviton3 will deliver up to 25% more compute performance and up to twice as much floating point & cryptographic performance. On the machine learning side, Graviton3 includes support for bfloat16 data and will be able to deliver up to 3x better performance.

They also mention signed pointers, which keep the stack from being tampered with; this needs OS and compiler support, and is a defense aimed at stack-based attacks (a rough sketch of the idea follows the quote):

Graviton3 processors also include a new pointer authentication feature that is designed to improve security. Before return addresses are pushed on to the stack, they are first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they are validated before being used. An exception is raised if the address is not valid, thereby blocking attacks that work by overwriting the stack contents with the address of harmful code. We are working with operating system and compiler developers to add additional support for this feature, so please get in touch if this is of interest to you.
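To picture the mechanism, here is a purely conceptual sketch in Python (not how the hardware does it: on Graviton3 this is Armv8.3 pointer authentication done in hardware, and the signature is squeezed into the unused upper bits of the 64-bit pointer rather than stored separately). The return address is MACed with a secret key plus the current stack pointer, and the tag is checked before the address is used:

import hmac, hashlib

SECRET_KEY = b"per-process secret"  # hypothetical stand-in for the hardware key

def sign(return_addr, stack_pointer):
    # MAC over the return address, with the stack pointer as extra context
    msg = return_addr.to_bytes(8, "little") + stack_pointer.to_bytes(8, "little")
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()[:4]

def push(stack, return_addr, stack_pointer):
    stack.append((return_addr, sign(return_addr, stack_pointer)))

def pop(stack, stack_pointer):
    return_addr, tag = stack.pop()
    if not hmac.compare_digest(tag, sign(return_addr, stack_pointer)):
        raise RuntimeError("pointer authentication failed")  # the real CPU raises an exception here
    return return_addr

stack = []
push(stack, 0x400123, 0x7fff0000)
stack[-1] = (0xdeadbeef, stack[-1][1])  # attacker overwrites the saved return address
try:
    pop(stack, 0x7fff0000)
except RuntimeError as e:
    print(e)  # the overwritten address never gets used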

And it uses DDR5 memory:

C7g instances will be available in multiple sizes (including bare metal), and are the first in the cloud industry to be equipped with DDR5 memory. In addition to drawing less power, this memory delivers 50% higher bandwidth than the DDR4 memory used in the current generation of EC2 instances.

No pricing yet; it might land at the same level as c6g? But given the memory moved to a new generation, it could also be a bit more expensive?

Digging around a bit more, Arm has published a press release about designing the Cortex-M55 on Arm Neoverse based Graviton2 instances: 「Designing Arm Cortex-M55 CPU on Arm Neoverse powered AWS Graviton2 Processors」. There should be more details on Graviton3 once it's fully announced later...

A Look at the Intel + Varnish PR Claiming 500Gbps From a Single Server

Saw the PR piece 「Varnish Software Achieves 500Gbps Throughput Per Server for UHD Video Content」, a joint effort by Intel and Varnish, claiming 500Gbps of throughput from a single machine:

According to Varnish Software, the following were the outcomes of the test:

  • 509.7 Gbps live-linear throughput, using a dual-processor configuration
  • 487.2 Gbps video-on-demand throughput, using a dual-processor configuration

The whitepaper can be downloaded from the 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」 page in exchange for your personal info, though a quick search turns up a PDF on Intel's side as well (not sure whether the two are the same document): 「Delivering up to 500 Gbps Throughput for Next-Gen CDNs」.

The single-CPU server is described as having four 100Gbps interfaces and the dual-CPU server eight (SUT here stands for system under test):

These client systems were connected to the CDN servers using 100 GbE links through a switch; 4x100 GbE connections for the single-processor SUT, and 8x100 GbE for the dual-processor SUT. Testing was done using Wrk, a widely recognized open-source HTTP(S) benchmarking tool.

But if you actually look at the diagrams, the servers have two 100Gbps ports (single CPU) and four 100Gbps ports (dual CPU), and the wrk side likewise takes up two or four 100Gbps ports:

The test configuration is also given at the very end of the whitepaper. Everything runs on Ubuntu 20.04; the single-CPU setup uses two Intel 100Gbps NICs, while the dual-CPU setup uses four Mellanox 100Gbps NICs:

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Single processor SUT configuration was based on the Supermicro SMC 110P-WTR-TNR single socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by two 100 GbE Intel® Ethernet Network Adapters E810. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 8 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

3rd generation Intel Xeon Scalable testing done by Intel in September 2021. Dual processor SUT configuration was based on the Supermicro SMC 22OU-TNR dual socket server based on Intel® Xeon® Platinum 8380 processor (microcode: 0xd000280) with 40 cores operating at 2.3 GHz. The server featured 256 GB of RAM. Intel® Hyper-Threading Technology was enabled, as was Intel® Turbo Boost Technology 2.0. Platform controller hub was the Intel C620. NUMA balancing was enabled. BIOS version was 1.1. Network connectivity was provided by four 100 GbE Mellanox MCX516A-CDAT adapters. 1.2 TB of boot storage was available via an Intel SSD. Application storage totaled 3.84TB per drive and was provided by 12 Intel P5510 SSDs. The operating system was Ubuntu Linux release 20.04 LTS with kernel 5.4.0-80-generic. Compiler GCC was version 9.3.0. The workload was wrk/master (April 17, 2019), and the version of Varnish was varnish-plus-6.0.8r3. Openssl v1.1.1h was also used. All traffic from clients to SUT was encrypted via TLS.

This immediately leaves you scratching your head: how do four 100Gbps NICs add up to 500Gbps of bandwidth...

This PR immediately brings to mind the slides Netflix published earlier (covered before in 「Netflix 在單機服務 400Gbps 的影音流量」), in which Netflix mentioned that on the Intel platform they were limited by memory bandwidth and the whole machine could only reach 230Gbps.

Another guess: if the 500Gbps that Intel and Varnish are claiming is the total traffic counted on the switch (does anyone count it that way? are you Juniper?), then halving that 500Gbps (and that's being generous, without even subtracting the cache-miss traffic that has to go back to the origin server) gives roughly the same numbers Netflix got on FreeBSD...
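The back-of-the-envelope math behind that guess (just a sketch; counting both sides of the switch is my speculation, not something the whitepaper says):

nic_count = 4            # 100GbE ports visible on the dual-CPU server
nic_speed_gbps = 100
claimed_gbps = 509.7

server_capacity = nic_count * nic_speed_gbps   # 400 Gbps out of the server, per direction
per_server_estimate = claimed_gbps / 2         # if the claim counts both switch sides, the server itself pushes ~255 Gbps

print(server_capacity, per_server_estimate)    # 400 254.85, in the same ballpark as Netflix's 230 Gbps on Intel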

Now waiting for a rebuttal XDDD

The Story of How Moscow State University Students Built Their Own Dorm Network (2002~2013)

Saw 「Moscow state university network built by students」 on Lobsters Daily, though there's a bit more discussion on Hacker News: 「“Illegal” Moscow state university network built by students (2002-2013) (medium.com/pv.safronov)」.

It's the story of the Moscow State University dorm network, which ran from 2002 until 2013:

They had built a network on their own, which provided students Internet connection from 2002 till 2013 (when the administration effectively legalized this network).

It's mostly an old-timers' nostalgia piece, but the author kept a huge number of photos, which makes it even more fun...

Looking things up a bit, NCTU should have had dorm networking since around 1992 or 1993 (maybe earlier?), laid by the school itself with coaxial cable and later upgraded to twisted pair (UTP); it should be 1Gbps now, though I don't know how the links between buildings are run. (Probably still fiber; given the oversubscription ratio it's presumably running a 10Gbps protocol, and the short-reach S-type optics are probably all too short for those distances, so likely the L-type, not sure how many pairs...)

First of all, this is a network build spanning the entire main building, divided into five big blocks:

Measuring on Google Maps, the straight-line distance between the two ends is already close to 0.5km; I don't think I picked the wrong building, this looks like the one...

The author's photos also show just how huge the building is:

Back to the network itself: at first everything was connected with 10Mbps hubs (probably the easiest gear to get in 2002?), but with hubs the usable bandwidth is poor (everything shares one collision domain, so efficiency drops); swapping in switches removed a lot of unnecessary traffic (see the sketch after the quote):

In the beginning, there were only 10 Mbps hubs available. They were extremely slow because the packet arriving at one port was mirrored to all other ports, flooding the network with unnecessary traffic. Later they were replaced by L2 switches, that are smart enough to choose the destination port for the packet.
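The difference described in the quote comes down to the forwarding decision: a hub repeats every frame to every other port, while an L2 switch learns which MAC address sits behind which port and forwards only there. A minimal sketch of that learning/forwarding logic (frames simplified to just source and destination MACs):

class Hub:
    def forward(self, in_port, frame, ports):
        # every frame goes out every other port, so all hosts share the bandwidth
        return [p for p in ports if p != in_port]

class LearningSwitch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port

    def forward(self, in_port, frame, ports):
        src, dst = frame                               # simplified frame: (source MAC, destination MAC)
        self.mac_table[src] = in_port                  # learn which port the sender lives behind
        if dst in self.mac_table:
            return [self.mac_table[dst]]               # unicast straight to the right port
        return [p for p in ports if p != in_port]      # unknown destination: flood like a hub

sw = LearningSwitch()
ports = [1, 2, 3, 4]
print(sw.forward(1, ("aa", "bb"), ports))  # "bb" not learned yet -> flood: [2, 3, 4]
print(sw.forward(2, ("bb", "aa"), ports))  # "aa" was learned on port 1 -> [1]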

But the dorm network had a competitor, and the two sides kept sabotaging each other XDDD

Having two competing networks was a cherry on the top of it. They were cutting each other’s cables, executing DDoS attacks, stealing equipment, etc.

Then they laid fiber between the major blocks; given the distances measured above, this was presumably forced by the 100-meter length limit of twisted pair (UTP), so those runs had to be fiber:

To connect this router to the other parts of the network, we laid fiber cables through secret passages in the walls, ceilings, floors, and ventilation shafts.

From there, switches fan out to each room (the so-called last mile):

And on the outside of the building they really didn't hold back:

The article also covers their trick of splitting up the eight wires of a UTP cable; the 100Mbps mentioned here suggests they had upgraded by then (see the sketch after the quote):

Most of the time, we managed to reduce the number of cables coming out of the window down to ~15. We could do this because of a simple trick: the regular ethernet cable (Cat 5e) has 8 wires. And to transmit 100 Mbps, you need just 4 of them. All 8 wires are required only for a 1 Gbps connection. So using a single cable, you can connect 2 clients who live close to each other. We always tried to do that if possible because the hole can’t fit so many cables.
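For reference, 100BASE-TX only uses two of the four pairs (pins 1/2 and 3/6), leaving pins 4/5 and 7/8 free to carry a second 100Mbps link. A sketch of the pin mapping such a splitter would use (the article doesn't give their exact wiring; this is just the common way to do it):

# 100BASE-TX pin usage on an 8P8C (RJ45) connector
CLIENT_A = {1: "TX+", 2: "TX-", 3: "RX+", 6: "RX-"}  # the pins a 100Mbps link actually uses
CLIENT_B = {4: "TX+", 5: "TX-", 7: "RX+", 8: "RX-"}  # spare pairs carrying the second client

# at each end of the shared cable, client B's plug is rewired so that its
# pins 1/2/3/6 land on the cable's 4/5/7/8
SPLITTER_MAP = {1: 4, 2: 5, 3: 7, 6: 8}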

Pulling back for a wider view:

The rest of the article goes through a lot of stories and common troubleshooting problems; today's network gear supports much better architectures, so most of these should have nicer solutions now...

Material on Bypassing Anti-bot Mechanisms on the Web

Over the last couple of days a few discussions popped up on Hacker News about how to get around anti-bot mechanisms on the Web:

The first article looks at the problem from every angle, starting with a high-level breakdown (「Where to begin building undetectable bot?」) and a survey of the current products (in 「List of anti-bot software providers」).

The article also mentions an interesting tool, 「puppeteer-extra-plugin-stealth」, mainly for Node.js-style environments; a quick search shows there's also pyppeteer-stealth on the Python side, though the Python version admits outright that it's not perfect XDDD (a quick usage sketch follows the quote below)

Transplanted from puppeteer-extra-plugin-stealth, Not perfect.
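A minimal usage sketch on the Python side (based on the pyppeteer-stealth README; the exact API may have changed, and the target URL is just a placeholder):

import asyncio
from pyppeteer import launch
from pyppeteer_stealth import stealth

async def main():
    browser = await launch(headless=True)
    page = await browser.newPage()
    await stealth(page)                     # patch the page to hide the usual headless fingerprints
    await page.goto("https://example.com")  # placeholder target, just for illustration
    print(await page.content())
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())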

The second article says right at the start that the author isn't a fan of proxies, because proxies are easy to detect. At the end it offers two approaches: one is a pile of cheap Android phones with data SIMs, the other is Android emulators paired with 4G dongles.

With these ideas in mind, it seems like I could improve my RSS tooling a bit...

An Android Phone Without Google Services

Also found on Lobsters Daily: how to build an Android setup with Google services stripped out: 「Lineage with microG on a Sony XA2」.

The main thing is the keywords: TWRP replaces the recovery image, LineageOS is the base system, microG is the open source compatibility layer for Google's proprietary APIs, Magisk takes care of everything root-related, and F-Droid is the app store for open source software, through which you can install Aurora Store and then get apps from the Play Store.

People who go this route are mostly doing it for privacy, and you can expect quite a few apps not to work...

USB-powered CR2032/CR2016

Saw this fun thing in 「USB board emulates CR2032 or CR2016 coin cell battery」: it takes 5V in over USB and converts it to 3V, delivered in the form factor of a CR2032 or CR2016 coin cell:

According to the description it's meant for development and testing:

You can now develop CR2032 or CR2016 powered devices without having to use an actual coin cell thanks to Peter Misenko’s (Bobricius) “coin cell battery emulator CR2016/CR2032”.

It looks workable as long as the extension sticking out the side doesn't get in the way; then again, most development boards have pins you can feed 3V into directly, but it's still a pretty fun gadget...

Framework Laptops Hit the Parts Shortage Too, Swapping Out the Audio Chip

Framework's laptop has been getting a lot of attention in the community lately; the modular design makes repairs easy and leaves plenty of room for customizing the specs. The 「Marketplace」 page lists a lot of swappable parts: besides the usual wireless card, SSD and RAM, even the mainboard, the keyboard, and the USB and HDMI ports are modules.

The point here, though, is that the audio chip also got caught in this round of supply chain trouble: 「Solving for Silicon Shortages」; the Hacker News discussion 「Framework: Solving for Silicon Shortages (frame.work)」 is also worth a look.

From the article it looks like the lead time on the Realtek ALC295 blew up:

Chips that would normally have 16-20 week lead times (meaning we’d place typically binding orders that far ahead of needing parts in our hands) went up to 52 weeks. In one case, we even got notified of a 68 week lead time on a chip!

We were able to get enough Realtek ALC295 audio CODECs to develop the Framework Laptop and get through the first few months of production, but nowhere near enough to fulfill ongoing demand from the US and Canada, let alone the additional countries we’d like to ship to.

So they decided to switch to the Tempo 92HD95B:

Luckily, we were able to find an alternative CODEC that lets us stay in production: the Tempo 92HD95B.

Checking the datasheets, the original Realtek ALC295 is a QFN-48 while the Tempo 92HD95B is a QFN-40, so quite a few things will need to change... Presumably they couldn't even find stock on the open market and were forced into a redesign; the situation at my company looks much the same, and it seems like everyone has been crying their eyes out lately :o