There is no evidence to suggest that the companies identified in the images had any knowledge that a part of their project had been subcontracted to North Korean animators. In fact, as the editing comments on all the files, including those related to US-based animations, were written in Chinese, it is likely that the contracting arrangement was several steps downstream from the major producers.
Happy April 1st! This post is part of April Cools Club: an April 1st effort to publish genuine essays on unexpected topics. Please enjoy this true story, and rest assured that the tech content will be back soon!
The article mentions that the author's dad is an engineer. His dad ran his own company fairly close to home, so he used a directional antenna to bridge the company's commercial internet connection over to the house. It worked fine for ten years, but in the last few weeks the link would only come up on rainy days?
We replaced our old 802.11g devices with new 802.11n ones, which took advantage of new magic math and physics to make signals more resistant to interference.
Also, the subheadings in the article seem to be a riff on the five stages of grief, except here they are Denial, Bargaining, Determination, Debugging, and Realization.
Along with the Pulse app, there is a second part of the application: a Node.js app that reads CSV files populated with energy-usage data and displays them to the user in the web UI. It uses Node.js 0.10.26, Express.js 4.13.3, and Socket.io 1.3.6.
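A minimal sketch of what that stack typically looks like (the file name, CSV layout, and event names here are assumptions for illustration, not taken from the actual Pulse source):

var express = require('express');
var fs = require('fs');

var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

// Serve the static web UI.
app.use(express.static('public'));

io.on('connection', function (socket) {
  // Read the energy-usage CSV and push the parsed rows to the browser.
  fs.readFile('usage.csv', 'utf8', function (err, data) {
    if (err) { return socket.emit('app-error', err.message); }
    var rows = data.trim().split('\n').map(function (line) {
      return line.split(',');
    });
    socket.emit('usage', rows);
  });
});

http.listen(3000);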
If you use zsh and have trouble running llamafile, try saying sh -c ./llamafile. This is due to a bug that was fixed in zsh 5.9+. The same is the case for Python subprocess, old versions of Fish, etc.
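In other words, under an affected zsh the direct invocation can fail, and wrapping it in sh is the documented workaround:

./llamafile          # may fail under zsh older than 5.9
sh -c ./llamafile    # works: sh hands the executable off correctly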
On Linux, Nvidia cuBLAS GPU support will be compiled on the fly if (1) you have the cc compiler installed, (2) you pass the --n-gpu-layers 35 flag (or whatever value is appropriate) to enable GPU, and (3) the CUDA developer toolkit is installed on your machine and the nvcc compiler is on your path.
But you can see that nothing was offloaded to the GPU:
llm_load_tensors: ggml ctx size = 0.11 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 4165.47 MB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/35 layers to GPU
llm_load_tensors: VRAM used: 0.00 MB
After trying a few different approaches, I found that you have to run sh -c "./llamafile --n-gpu-layers 35", i.e. wrap the arguments into the quoted command as well. With that, the corresponding offload information shows up, and the output is much faster too:
llm_load_tensors: ggml ctx size = 0.11 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 70.42 MB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 35/35 layers to GPU
llm_load_tensors: VRAM used: 4095.05 MB
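A quick way to double-check that the GPU is really in use is to watch VRAM from another terminal while llamafile is running (standard NVIDIA tooling):

nvidia-smi    # VRAM usage should jump by roughly the ~4 GB reported above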
Issuer: (CA ID: 276)
    commonName = DST Root CA X3
    organizationName = Digital Signature Trust Co.
Validity
    Not Before: Jan 20 19:14:03 2021 GMT
    Not After : Sep 30 18:14:03 2024 GMT
Subject: (CA ID: 7394)
    commonName = ISRG Root X1
    organizationName = Internet Security Research Group
    countryName = US
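If you want to check which chain a server is actually sending, standard openssl tooling will print the issuer/subject summary of every certificate in the handshake (example.com is a placeholder):

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | sed -n '/^Certificate chain/,/^---/p'

If DST Root CA X3 still shows up as an issuer in that list, the server is serving the longer cross-signed chain.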
On Thursday, Feb 8th, 2024, we will stop providing the cross-sign by default in requests made to our /acme/certificate API endpoint. For most Subscribers, this means that your ACME client will configure a chain which terminates at ISRG Root X1, and your webserver will begin providing this shorter chain in all TLS handshakes. The longer chain, terminating at the soon-to-expire cross-sign, will still be available as an alternate chain which you can configure your client to request.
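As a concrete example of configuring that, certbot can pin a chain by the common name of its topmost certificate via --preferred-chain, so you can opt into the shorter chain early or keep the long one while it lasts:

# Request the chain terminating at ISRG Root X1 instead of the
# cross-signed one; use "DST Root CA X3" to pin the long chain.
certbot renew --preferred-chain "ISRG Root X1"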
On Thursday, June 6th, 2024, we will stop providing the longer cross-signed chain entirely. This is just over 90 days (the lifetime of one certificate) before the cross-sign expires, and we need to make sure subscribers have had at least one full issuance cycle to migrate off of the cross-signed chain.
And finally the expiry date, 2024/09/30:
On Monday, September 30th, 2024, the cross-signed certificate will expire. This should be a non-event for most people, as any client breakages should have occurred over the preceding six months.
There's a Taiwanese brand, Vivotek, that makes some relatively affordable NDAA-compliant cameras. I'm hoping to try them out myself, but unfortunately I haven't had time yet.