Intel's X86S Initiative

While cleaning up some links I noticed that X86S, the initiative Intel proposed in 2023, is still moving forward; in the Linux kernel you can see the commit news: "Intel continues prepping the Linux kernel for X86S", which cites the report "Intel Continues Prepping The Linux Kernel For X86S".

Intel's document "Envisioning a Simplified Intel Architecture" gives a sense of the idea: it looks like boot no longer goes through real mode and instead starts directly in a protected, 64-bit environment, while legacy-architecture needs are handled by raising CPU exceptions for the OS or VMM to deal with. It's unclear how much performance that approach would cost, though...

Then there's the question of how cooperative Microsoft will be, plus pricing, plus whether AMD next door will follow suit; all of that feeds into whether the market ultimately buys in. (And not just retail buyers, but also cloud providers like AWS...)

Nvidia's CUDA EULA Bans Running on Translation Layers

Saw the thread "Nvidia bans using translation layers for CUDA software to run on other chips (tomshardware.com)" on Hacker News; the original report is "Nvidia bans using translation layers for CUDA software — previously the prohibition was only listed in the online EULA, now included in installed files".

The update at the top of the article already covers the key point: the text has been included since version 11.6:

[Edit 3/4/24 11:30am PT: Clarified article to reflect that this clause is available on the online listing of Nvidia's EULA, but has not been in the EULA text file included in the downloaded software. The warning text was added to 11.6 and newer versions of the installed CUDA documentation.]

The main clause is this one:

You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform.

From the "CUDA Toolkit Archive" you can see that 11.6.0 dates back to January 2022:

CUDA Toolkit 11.6.0 (January 2022), Versioned Online Documentation

This looks related to ZLUDA (see "ZLUDA: Running CUDA Programs Directly on Intel/AMD GPUs").

The timeline lines up: the author was approached by Intel in 2021, switched to a contract with AMD in 2022, and AMD later dropped the project too:

In 2021 I was contacted by Intel about the development of ZLUDA. I was an Intel employee at the time. While we were building a case for ZLUDA internally, I was asked for a far-reaching discretion: not to advertise the fact that Intel was evaluating ZLUDA and definitely not to make any commits to the public ZLUDA repo. After some deliberation, Intel decided that there is no business case for running CUDA applications on Intel GPUs.

Shortly thereafter I got in contact with AMD and in early 2022 I have left Intel and signed a ZLUDA development contract with AMD. Once again I was asked for a far-reaching discretion: not to advertise the fact that AMD is evaluating ZLUDA and definitely not to make any commits to the public ZLUDA repo. After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs.

ZLUDA: Running CUDA Programs Directly on Intel/AMD GPUs

I previously covered "Running CUDA Programs Directly on Intel Integrated Graphics with ZLUDA", and the story has since taken a big turn: AMD stepped in to fund the project, which now supports AMD GPUs: "AMD Quietly Funded A Drop-In CUDA Implementation Built On ROCm: It's Now Open-Source". The project lives at vosen/ZLUDA on GitHub, and the AMD GPU support landed in the huge commit 1b9ba2b2333746c5e2b05a2bf24fa6ec3828dcdf:

Nobody expects the Red Team

Too many changes to list, but broadly:
* Remove Intel GPU support from the compiler
* Add AMD GPU support to the compiler
* Remove Intel GPU host code
* Add AMD GPU host code
* More device instructions. From 40 to 68
* More host functions. From 48 to 184
* Add proof of concept implementation of OptiX framework
* Add minimal support of cuDNN, cuBLAS, cuSPARSE, cuFFT, NCCL, NVML
* Improve ZLUDA launcher for Windows

The twists along the way are honestly hard to sum up... The author initially worked at Intel, where the project was eventually judged to have no future; AMD then got in touch and funded it, only to reach the same conclusion later. Under the contract with AMD, if AMD decided there was no business case, the author was free to open source it:

Why is this project suddenly back after 3 years? What happened to Intel GPU support?

In 2021 I was contacted by Intel about the development of ZLUDA. I was an Intel employee at the time. While we were building a case for ZLUDA internally, I was asked for a far-reaching discretion: not to advertise the fact that Intel was evaluating ZLUDA and definitely not to make any commits to the public ZLUDA repo. After some deliberation, Intel decided that there is no business case for running CUDA applications on Intel GPUs.

Shortly thereafter I got in contact with AMD and in early 2022 I have left Intel and signed a ZLUDA development contract with AMD. Once again I was asked for a far-reaching discretion: not to advertise the fact that AMD is evaluating ZLUDA and definitely not to make any commits to the public ZLUDA repo. After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs.

One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.

That part is easy enough to understand: CUDA is Nvidia's ecosystem after all, and unless you overtake them and then define a pile of your own proprietary extensions (like what Microsoft did with IE back in the day), you're just doing the legwork for someone else's platform.

Phoronix got access to the software a few days before the open sourcing, and after several days of testing gave it a "pretty decent" verdict:

Andrzej Janik reached out and provided access to the new ZLUDA implementation for AMD ROCm to allow me to test it out and benchmark it in advance of today's planned public announcement. I've been testing it out for a few days and it's been a positive experience: CUDA-enabled software indeed running atop ROCm and without any changes. Even proprietary renderers and the like working with this "CUDA on Radeon" implementation.

Also, to avoid leaking information through benchmark software that auto-reports results back to servers, ZLUDA deliberately identified itself as a generic "Graphics Device"; with this open-source release it will switch back to the real device name:

In my screenshots and for the past two years of development the exposed device name for Radeon GPUs via CUDA has just been "Graphics Device" rather than the actual AMD Radeon graphics adapter with ROCm. The reason for this has been due to CUDA benchmarks auto-reporting results and other software that may have automated telemetry, to avoid leaking the fact of Radeon GPU use under CUDA, it's been set to the generic "Graphics Device" string. I'm told as part of today's open-sourcing of this ZLUDA on Radeon code that the change will be in place to expose the actual Radeon graphics card string rather than the generic "Graphics Device" concealer.

The author's benchmarks vary quite a bit across workloads, but by the author's overall metric, performance is roughly on par with the OpenCL version:

Phoronix also ran a head-to-head against Nvidia, using Blender, which natively supports both Nvidia and AMD cards. The result was jaw-dropping: going through ZLUDA's translation was faster than the native support, which raises some interesting optimization questions. (This was the BMW27 scene; the Classroom scene showed the same pattern.)

Even so, CUDA over AMD GPUs probably still won't take off: the official approach will be to get each framework to support the hardware natively, and most developers build on top of frameworks rather than rolling their own from scratch...

Intel's Plan for Running AVX-512 Instructions on E-cores: AVX10.2

Saw "Intel AVX10.2 ISA to enable AVX-512 capabilities on E-cores", which points to Intel's technical document "The Converged Vector ISA: Intel® Advanced Vector Extensions 10" laying out Intel's follow-up plans for AVX-512.

The key is this chart, which shows that the AVX10.2 plan includes E-core support:

It's still a ways off though; the document only marks it as "future":

The current rumor is that AVX10.1 will show up on Xeon in 2024 or 2025:

Intel says that version 1 of the AVX10 vector ISA (AVX10.1) will first be implemented on Intel Xeon “Granite Rapids” processors that, according to some media reports, are expected to launch by 2024 or 2025, so it will likely take a long while before AVX10.2 is implemented on processors with E-cores.

But AVX10.1 still doesn't bring the ability to run AVX-512 on E-cores, so AVX10.2 is presumably even further out...

Machine Learning Compute on Intel Arc GPUs

Following up on "LLM Compute on AMD Platforms", the article "Testing Intel's Arc A770 GPU for Deep Learning Pt. 2" looks at how Intel, the other company playing catch-up, fares with its own Intel Arc GPUs for ML workloads.

The article accesses the GPU through Intel's own OpenVINO and Microsoft's DirectML.

The card tops out at 16GB of memory, which counts as serviceable for ML training?

Incidentally, once OpenCL is set up, even an Intel N100 box can run KataGo; obviously not as fast as a discrete GPU, but still much faster than pure CPU compute...

Amazon EC2 Launches New-Generation Intel CPUs, Plus an Odd Flex Product

Jeff Barr announced Amazon EC2's new Intel CPU lineup: "New Seventh-Generation General Purpose Amazon EC2 Instances (M7i-Flex and M7i)".

Start with m7i, which is straightforward: it's Intel's new CPU, with the rather coy claim of merely being "15% faster than other cloud providers' Intel CPUs":

Today we are launching Amazon Elastic Compute Cloud (Amazon EC2) M7i-Flex and M7i instances powered by custom 4th generation Intel Xeon Scalable processors available only on AWS, that offer the best performance among comparable Intel processors in the cloud – up to 15% faster than Intel processors utilized by other cloud providers.

Compared with the previous-generation Intel instances (presumably m6i?), it offers 15% better price/performance:

The M7i instances are available in nine sizes (with two sizes of bare metal instances in the works), and offer 15% better price/performance than the previous generation of Intel-powered instances.

The most interesting (if not necessarily useful) part of this launch is m7i-flex, which claims 19% better price/performance than m6i "for many workloads":

M7i-Flex instances are available in the five most common sizes, and are designed to give you up to 19% better price/performance than M6i instances for many workloads.

Here it's mentioned that m7i-flex is m7i with 5% knocked off the price:

The M7i-Flex instances are a lower-cost variant of the M7i instances, with 5% better price/performance and 5% lower prices. They are great for applications that don’t fully utilize all compute resources. The M7i-Flex instances deliver a baseline of 40% CPU performance, and can scale up to full CPU performance 95% of the time.

The "General purpose instances" page spells it out more clearly: time spent above the 40% CPU baseline is capped at 95% of a 24-hour window:

For times when workloads need more performance, M7i-flex instances provide the ability to exceed baseline CPU and deliver up to 100 percent CPU for 95 percent of the time over a 24-hour window.

The design is rather subtle; setting aside the ARM-based t4g next door, since that at least isn't a drop-in replacement...

m7i.large (2 vCPU + 8GB RAM) costs $0.1008/hr in us-east-1, and you can use 100% of its CPU with no restrictions whatsoever.

m7i-flex.large is $0.09576/hr, exactly 5% less, at the cost of having to stay at or below 40% CPU for 5% of the time.

For comparison, a t3.large also has 2 vCPU + 8GB RAM, but with a built-in 30% CPU baseline, at $0.0832/hr.

From this you can see that most small workloads will still land on t3 or even t3a.

Only workloads whose average load over 24 hours exceeds 40%, but which don't stay above 40% the whole time — i.e. the ones currently running on m6a or m6i — would even consider evaluating m7i-flex.
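The arithmetic behind this comparison can be sketched in a few lines. The prices and baselines are the us-east-1 on-demand numbers quoted above; the "cost per guaranteed full-speed hour" framing is my own way of comparing them, not anything AWS publishes.

```python
# Rough comparison of the instance types discussed above, using the
# us-east-1 on-demand prices quoted in the post. The "cost per guaranteed
# full-speed hour" metric is my own illustration, not AWS's math.

instances = {
    # name: (hourly price in USD, CPU baseline as a fraction of full speed)
    "m7i.large": (0.1008, 1.00),        # no baseline: 100% CPU any time
    "m7i-flex.large": (0.09576, 0.40),  # 40% baseline, full CPU 95% of a 24h window
    "t3.large": (0.0832, 0.30),         # 30% baseline (credit-based)
}

# m7i-flex discount vs m7i: should come out to the 5% mentioned above.
discount = 1 - instances["m7i-flex.large"][0] / instances["m7i.large"][0]
print(f"m7i-flex discount vs m7i: {discount:.1%}")  # 5.0%

# Worst-case throttled time for m7i-flex: 5% of a 24-hour window.
throttled_minutes = 0.05 * 24 * 60
print(f"m7i-flex may be held to its baseline up to {throttled_minutes:.0f} min/day")  # 72

# Price divided by the CPU share you can always count on -- a quick way to
# see why t3 wins for mostly-idle workloads while m7i wins for sustained load.
for name, (price, baseline) in instances.items():
    print(f"{name}: ${price / baseline:.4f} per guaranteed full-speed hour")
```

Run this and the niche becomes visible: if you only care about the guaranteed baseline, m7i-flex is the most expensive of the three, so it only pays off for workloads that actually ride above 40% most of the day.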

And that's before noting that m6a on AWS is already priced 10% below the Intel m6i. Once you run the numbers, the positioning is odd: there are niches where it fits, but not many... It feels a bit like a product AWS shipped to appease Intel?

Following the logic though, this scheme is really billing-based rather than anything technical, so if Intel can have it, AMD and ARM variants should show up sooner or later?

It looks like an extension of the t-series; probably worth waiting to see whether something similar appears for the AMD and ARM lineups.

ASUS Takes Over Intel NUC Maintenance and Future Development

Intel put out a press release saying it has signed a term sheet with ASUS to take over maintenance and product development of the Intel NUC line: "Intel and ASUS Agree to Term Sheet to Take Intel NUC Systems Product Line Forward".

Today, Intel announced it has agreed to a term sheet with ASUS, a global technology solution provider, for an agreement to manufacture, sell and support the Next Unit of Compute (NUC) 10th to 13th generations systems product line, and to develop future NUC systems designs.

Nothing on ASUS's own site yet, though...

A few days ago Intel announced it was discontinuing the NUC line, with no word of a successor at the time: "Intel Exiting the PC Business as it Stops Investment in the Intel NUC".

Maybe ASUS only came to the table after the news broke? Or Intel had been talking to partners all along, the news leaked mid-negotiation, and the outside world only heard the discontinuation half?

Anyway, ASUS is now the officially designated successor, giving prospective buyers somewhere to go. ASUS already has its own mini-PC lines (see the "ASUS 特規/改機專區" brand store on PChome 24h, for instance), so this is essentially an existing team taking over a well-known product line; what happens to the existing contract manufacturing with ECS and Pegatron is anyone's guess...

Running CUDA Programs Directly on Intel Integrated Graphics with ZLUDA

Fun find on the Hacker News front page: "Zluda: Run CUDA code on Intel GPUs, unmodified (github.com/vosen)"; the project is at "CUDA on Intel GPUs", last updated in 2021.

The idea is easy to guess: ride CUDA's ecosystem by running existing CUDA applications directly on Intel GPUs, which would open up software that only has a CUDA implementation and no OpenCL one.

I first assumed this targeted Intel's new discrete Arc GPUs, but it turns out to be a project abandoned in 2021, tested on integrated graphics:

ZLUDA performance has been measured with GeekBench 5.2.3 on Intel UHD 630.

Judging from the benchmark results, most of the functionality seems to have been ported, at least well enough that the tests run instead of crashing:

That said, the Hacker News discussion suggests there are still problems, and most AI applications fall back to supporting OpenCL anyway, so it may not be all that practical...

Intel Has a Typeface Too

Saw "Intel One Mono Typeface (github.com/intel)" on Hacker News: Intel has released a typeface under an open source license; the GitHub page is "Intel One Mono Typeface".

What made Hacker News this time is a monospace font; the official sample looks like this:

Intel previously released Clear Sans under the Apache License, Version 2.0, though from the page you can see it's no longer maintained.

Plenty of vendors have their own brand typefaces, like IBM's Plex family or Red Hat's "Red Hat Typeface Files". Vendors that ship their own OS even more so (Microsoft, Apple, and Ubuntu, for instance).

Switching fonts is a mood thing for me these days; I change to a new one every once in a while...

Intel's AVX-512-Accelerated Sorting for NumPy Has Been Merged

Intel's AVX-512-accelerated sorting implementation for NumPy has been merged into mainline: "Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts".

The GitHub PR is "ENH: Vectorize quicksort for 16-bit and 64-bit dtype using AVX512 #22315", with comments like:

This patch adds AVX512 based 64-bit on AVX512-SKX and 16-bit sorting on AVX512-ICL. All the AVX512 sorting code has been reformatted as a separate header files and put in a separate folder. The AVX512 64-bit sorting is nearly 10x faster and AVX512 16-bit sorting is nearly 16x faster when compared to std::sort. Still working on running NumPy benchmarks to get exact benchmark numbers

16-bit int sped up by 17x and float64 by nearly 10x for random arrays. Benchmarked on a 11th Gen Tigerlake i7-1165G7.
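From the user's side there is nothing to opt into: on a NumPy build that includes the vectorized sort, running on an AVX-512 CPU, the fast path is dispatched automatically inside `np.sort()` for the supported dtypes. A minimal sketch of the call site, with a rough timing (the timing numbers and whether the AVX-512 path is actually taken depend entirely on your CPU and NumPy build):

```python
# np.sort() picks the AVX-512 quicksort automatically when available;
# there is no flag to set. This just shows the call and a rough timing.
import timeit

import numpy as np

rng = np.random.default_rng(42)
a = rng.standard_normal(1_000_000)  # float64, one of the accelerated dtypes

# Average wall time over 10 runs; np.sort returns a sorted copy.
t_np = timeit.timeit(lambda: np.sort(a, kind="quicksort"), number=10) / 10
print(f"np.sort (quicksort) on 1M float64: {t_np * 1e3:.1f} ms")

# Sanity check: the result really is sorted and the input is untouched.
s = np.sort(a, kind="quicksort")
assert np.all(s[:-1] <= s[1:])
```

To see whether you got the vectorized path, the practical test is simply to benchmark against an older NumPy on the same machine.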

The somewhat "amusing" part: AVX-512 has been removed from Intel's new consumer CPUs, surviving only on server and workstation parts, while AMD's Zen 4 went ahead and added AVX-512 support...

Over in "Intel Publishes Fast AVX-512 Sorting Library, 10~17x Faster Sorts in NumPy (phoronix.com)" people naturally also asked whether the far more widespread AVX2 (256 bits wide) would yield big gains too, since nearly every CPU from the past decade has AVX2... but there doesn't seem to be any in-depth discussion.