Smart speakers collect voice input that can be used to infer sensitive information about users. Given a number of egregious privacy breaches, there is a clear unmet need for greater transparency and control over data collection, sharing, and use by smart speaker platforms as well as third party skills supported on them. To bridge the gap, we build an auditing framework that leverages online advertising to measure data collection, its usage, and its sharing by the smart speaker platforms.
We evaluate our framework on the Amazon smart speaker ecosystem. Our results show that Amazon and third parties (including advertising and tracking services) collect smart speaker interaction data. We find that Amazon processes voice data to infer user interests and uses it to serve targeted ads on-platform (Echo devices) as well as off-platform (web). Smart speaker interaction leads to as much as 30X higher ad bids from advertisers. Finally, we find that Amazon's and skills' operational practices are often not clearly disclosed in their privacy policies.
A few pieces of information stand out; one of them is the "Network traffic distribution by persona, domain name, purpose, and organization" breakdown:
Kagi will come as a free version with limited use; and an unlimited use, paid option at $10 a month, both versions having great search results with less spam and completely ad-free, tracking free, and with none of your search data being retained.
Kagi is already my default search engine, and I'm actually quite satisfied with the quality (I occasionally switch over to DuckDuckGo and Google to compare); now I'm just waiting for the paid plan to go live...
We do eviction based on an algorithm called “least recently used” or LRU. This means that the least-requested content can be evicted from cache first to make space for more popular content when storage space is full.
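For intuition, here is a minimal LRU cache sketch in JavaScript; it is not Cloudflare's actual implementation, and the class name and capacity parameter are purely illustrative. It relies on the fact that a Map iterates keys in insertion order, so the first key is always the least recently used:

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.map = new Map() // Map preserves insertion order: oldest entry first
  }
  get(key) {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)
    // Re-insert to mark this key as the most recently used
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (the Map's first key)
      const oldest = this.map.keys().next().value
      this.map.delete(oldest)
    }
  }
}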
The Cache Reserve Plan will mimic the low cost of R2. Storage will be $0.015 per GB per month and operations will be $0.36 per million reads, and $4.50 per million writes.
There's also the part of Cache Reserve that hasn't been formally announced yet:
(Cache Reserve pricing page will be out soon)
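As a rough back-of-the-envelope check of the prices quoted above (the workload numbers here are made up purely for illustration), storing 1,000 GB with 50 million reads and 5 million writes in a month would come out to about $55.50:

// Quoted Cache Reserve pricing: $0.015/GB-month, $0.36 per million reads, $4.50 per million writes
const storageGB = 1000 // hypothetical workload
const readsM = 50
const writesM = 5
const monthlyCost = storageGB * 0.015 + readsM * 0.36 + writesM * 4.5
console.log(monthlyCost) // 15 + 18 + 22.5 = 55.5 (USD per month)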
For users who want to push hit rate to the extreme, this is at least an option; another thought is that live-streaming protocols (like HLS) could probably use this to reduce the load on the origin server?
You can tell from some of the descriptions that this was probably driven by the Datacenter side, so in this open source release, Datacenter GPU support is at production level, while GeForce GPU and Workstation GPU support is flagged as alpha level:
Which GPUs are supported by Open GPU Kernel Modules?
Open kernel modules support all Ampere and Turing GPUs. Datacenter GPUs are supported for production, and support for GeForce and Workstation GPUs is alpha quality. Please refer to the Datacenter, NVIDIA RTX, and GeForce product tables for more details (Turing and above have compute capability of 7.5 or greater).
The user-mode drivers, however, remain closed source:
Will the source for user-mode drivers such as CUDA be published?
These changes are for the kernel modules; while the user-mode components are untouched. So the user-mode will remain closed source and published with pre-built binaries in the driver and the CUDA toolkit.
For nouveau, this means they can dig some things out of the open source driver to reuse, but can they get to the same performance level as the proprietary driver?
This Amazon EFS update increases the number of simultaneous file locks an NFS mount can acquire to 65,536 (from 8,192 previously), enabling Amazon EFS to be used for a broader set of applications that heavily leverage file locking (including message broker and distributed analytics applications).
export default {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url)
    if (pathname === '/num-products') {
      const { result } = await env.DB.get(`SELECT count(*) AS num_products FROM Product;`)
      return new Response(`There are ${result.num_products} products in the D1 database!`)
    }
  }
}
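As a side note, here is a sketch of how the same handler might look with the prepare()/first() style interface described in later D1 documentation; treat the method names as my assumption rather than something from the announcement, and the 404 fallback is added just so the handler always returns a Response:

export default {
  async fetch(request, env) {
    const { pathname } = new URL(request.url)
    if (pathname === '/num-products') {
      // first() resolves to the first result row as a plain object
      const row = await env.DB.prepare(`SELECT count(*) AS num_products FROM Product;`).first()
      return new Response(`There are ${row.num_products} products in the D1 database!`)
    }
    return new Response('Not found', { status: 404 })
  }
}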
Also, the screenshot doesn't show a region, but something called tokyo3 shows up under class, and I have no idea what that is: