Not guaranteed. Notably, some implementations of sscanf are O(N), where N = std::strlen(buffer) [1]. For performant string parsing, see std::from_chars.
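As a quick illustration, a minimal std::from_chars sketch (C++17; the input string is just an example):

#include <charconv>      // std::from_chars (C++17)
#include <iostream>
#include <string_view>

int main() {
    std::string_view input = "12345 trailing text";
    int value = 0;
    // Parses only the range it is given: no locale handling and, unlike the
    // sscanf implementations above, no strlen() pass over the whole buffer.
    auto [ptr, ec] = std::from_chars(input.data(), input.data() + input.size(), value);
    if (ec == std::errc())
        std::cout << value << '\n';   // prints 12345; ptr points at " trailing text"
}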
Those catering to corporate clients intend to continue supporting Python 2.7 for a while. In October, Red Hat said it will stop supporting Python 2.7 in RHEL 8 come June 2024.
With all of the existing security mitigations enabled, Intel CPUs lose roughly 20% of their performance (on Intel, HT needs to be turned off):
While the impacts vary tremendously from virtually nothing to significant on an application-by-application level, the collective whack is ~15-16 percent on all Intel CPUs without Hyper-Threading disabled. Disabling increases the overall performance impact to 20 percent (for the 7980XE), 24.8 percent (8700K) and 20.5 percent (6800K).
The AMD CPUs are not tested with HT disabled, because disabling SMT isn't a required fix for the situation on AMD chips, but the cumulative impact of the decline is much smaller. AMD loses ~3 percent with all fixes enabled.
This ranged from Rodinia scientific OpenMP tests taking 30% longer, to Java-based DaCapo tests taking up to ~50% more time to complete, to code compilation tests taking measurably longer, to lower PostgreSQL database server performance, to longer Blender3D rendering times.
Tests on other Intel CPUs also show that the i9 is not the only part affected; low-end machines are hit as well:
Those affected systems weren't high-end HEDT boxes but included a low-end Core i3 7100 as well as a Xeon E5 v3 and Core i7 systems.
A bisect tracked down the commit that caused it:
That change is "STIBP" for cross-hyperthread Spectre mitigation on Intel processors. STIBP, short for Single Thread Indirect Branch Predictors, prevents cross-hyperthread control of decisions that are made by indirect branch predictors.
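To check whether STIBP is in effect on a given machine, the kernel's own vulnerability report can be read from sysfs; a minimal sketch (the exact mitigation string varies by kernel version and CPU):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // The kernel reports the active Spectre v2 mitigations here; when the STIBP
    // change is in effect, the string mentions STIBP.
    std::ifstream f("/sys/devices/system/cpu/vulnerabilities/spectre_v2");
    std::string line;
    if (std::getline(f, line))
        std::cout << line << '\n';   // e.g. "Mitigation: Full generic retpoline, ... STIBP: ..."
    else
        std::cout << "sysfs entry not available on this kernel\n";
}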
To understand the KPTI overhead, there are at least five factors at play. In summary:
Syscall rate: there are overheads relative to the syscall rate, although high rates are needed for this to be noticeable. At 50k syscalls/sec per CPU the overhead may be 2%, and climbs as the syscall rate increases. At my employer (Netflix), high rates are unusual in the cloud, with some exceptions (databases).
Context switches: these add overheads similar to the syscall rate, and I think the context switch rate can simply be added to the syscall rate for the following estimations.
Page fault rate: adds a little more overhead as well, for high rates.
Working set size (hot data): more than 10 Mbytes will cost additional overhead due to TLB flushing. This can turn a 1% overhead (syscall cycles alone) into a 7% overhead. This overhead can be reduced by A) pcid, available in Linux 4.14, and B) Huge pages.
Cache access pattern: the overheads are exacerbated by certain access patterns that switch from caching well to caching a little less well. Worst case, this can add an additional 10% overhead, taking (say) the 7% overhead to 17%.
The key point is that he gives a way to measure each of these. Take the first one, syscall rate: he used sudo perf stat -e raw_syscalls:sys_enter -a -I 1000 to count a program's syscalls, and plotted the results as a chart with thousands of syscalls per second on the X axis and the performance loss on the Y axis.
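To get a feel for the per-syscall cost on your own machine, a minimal sketch (Linux; SYS_getpid is picked only because it is a cheap syscall that bypasses any library-side caching when issued via syscall()):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <sys/syscall.h>  // SYS_getpid
#include <unistd.h>       // syscall()

int main() {
    const std::uint64_t n = 1000000;
    auto t0 = std::chrono::steady_clock::now();
    for (std::uint64_t i = 0; i < n; i++)
        syscall(SYS_getpid);          // forces a real kernel entry every iteration
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    // Run once with KPTI on and once with it off (e.g. pti=on vs. pti=off on the
    // kernel command line) to estimate the overhead at a given syscall rate.
    std::cout << n / secs << " syscalls/sec, "
              << secs / n * 1e9 << " ns/syscall\n";
}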
We can see that in CPU-bound workloads the overhead is 20-25%, reaching up to 30% in point select queries. In IO-bound (25G buffer pool) workloads, the observed overhead is 15-20%.
The essence of Spectre lies in CPUs supporting branch prediction and out-of-order execution: when the CPU hits a branch, it learns which way the branch tends to go, and feeding that information into out-of-order execution speeds things up dramatically. See the performance discussion in the earlier post「CPU Branch Prediction 的成本...」.
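For a rough feel of how much a predictable branch matters, a small self-contained benchmark sketch (the array size and pass count are arbitrary):

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::vector<std::uint8_t> data(1 << 20);
    std::mt19937 rng(42);
    for (auto &b : data) b = rng() & 0xFF;
    // std::sort(data.begin(), data.end());  // uncomment: the branch below becomes
    //                                       // predictable and the loop runs much faster

    auto t0 = std::chrono::steady_clock::now();
    std::uint64_t sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (std::uint8_t b : data)
            if (b >= 128) sum += b;          // data-dependent branch: ~50% mispredict
                                             // rate on random data
    auto t1 = std::chrono::steady_clock::now();
    std::cout << sum << " in "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
              << " ms\n";
}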
The principle can be seen in this kind of code:
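A representative version, along the lines of the example in the Spectre paper (the multiplier 4096 simply spreads the possible values of array1[x] onto distinct cache lines):

if (x < array1_size)
    y = array2[array1[x] * 4096];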
This kind of code shows up all the time in the security checks of modern programs: first confirm that x is valid, then actually pull the data out and process it. By repeatedly feeding in valid values of x, we can train the CPU into believing the condition is always TRUE; once the CPU has been mistrained, we suddenly feed in an out-of-range x, causing a branch misprediction. But because of out-of-order execution the CPU has already executed the y = ... line, which changes the contents of the cache.
Suppose register R1 contains a secret value. If the speculatively executed memory read of array1[R1] is a cache hit, then nothing will go on the memory bus and the read from [R2] will initiate quickly. If the read of array1[R1] is a cache miss, then the second read may take longer, resulting in different timing for the victim thread.
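On the attacker's side, recovering the leaked value comes down to timing loads. A minimal flush+reload-style probe sketch (x86 with GCC/Clang intrinsics; probe, the 4096-byte stride, and the 100-cycle threshold are illustrative):

#include <cstdint>
#include <iostream>
#include <x86intrin.h>   // _mm_clflush, _mm_mfence, __rdtscp

static std::uint8_t probe[256 * 4096];    // one cache line per possible byte value

// Time one load; a fast load means speculative execution already pulled the
// line into the cache.
static inline std::uint64_t time_access(volatile std::uint8_t *p) {
    unsigned aux;
    _mm_mfence();
    std::uint64_t t0 = __rdtscp(&aux);
    (void)*p;
    std::uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main() {
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 4096]);    // flush: start with every slot uncached
    // ... victim speculatively executes something like y = probe[secret * 4096] ...
    for (int i = 0; i < 256; i++)
        if (time_access(&probe[i * 4096]) < 100)   // reload: threshold is machine-dependent
            std::cout << "candidate secret byte: " << i << '\n';
}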
By the same reasoning, a timing attack that exploits the multiplier being kept busy can also be mounted:
if (false but mispredicts as true)
    multiply R1, R2
multiply R3, R4
In addition, of the three user-mode serializing instructions listed by Intel, only cpuid can be used in normal code, and it destroys many registers. The mfence and lfence (but not sfence) instructions also appear to work, with the added benefit that they do not destroy register contents. Their behavior with respect to speculative execution is not defined, however, so they may not work in all CPUs or system configurations.
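Applied to the earlier bounds-check pattern, a serializing-barrier sketch (using the lfence intrinsic; as the quote notes, its effect on speculation is not architecturally guaranteed everywhere):

#include <cstddef>
#include <cstdint>
#include <x86intrin.h>   // _mm_lfence

extern std::uint8_t array1[];    // same roles as in the earlier snippet
extern std::uint8_t array2[];
extern std::size_t array1_size;

std::uint8_t read_checked(std::size_t x) {
    if (x < array1_size) {
        _mm_lfence();            // barrier: keeps the dependent loads below from
                                 // executing speculatively past the bounds check
        return array2[array1[x] * 4096];
    }
    return 0;
}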
However, we may manipulate its generation to control speculative execution while modifying the visible, on-stack value to direct how the branch is actually retired.