Mac gamers have been dreaming about DirectX 12 support on macOS for years, and the open source community toils tirelessly making baby steps. Then Apple just drops a fully complete DX12 translation layer and doesn't even tell us!
From a fun side project just a few months ago, ggml has grown into a useful library and framework for machine learning, with a great open-source community.
In summary: their RabbitMQ consumer library and config are broken in that their consumers fetch additional messages when they shouldn't. I've never seen this in years of dealing with RabbitMQ. This caused a cascading failure: because only one of the fetched messages was manually ack'ed, the broker rightfully refused to deliver more messages to those consumers. Fixing this one fetch issue with their consumer would have fixed the entire problem. Switching to pg probably caused them to rewrite their message fetching code, which probably fixed the underlying issue.
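For context, the usual knob for that fetch behaviour is RabbitMQ's per-consumer prefetch limit. A minimal sketch with the pika client, where the queue name "jobs" and the trivial handler are illustrative assumptions, not anything from the post:

```python
import pika

# Connection parameters are an assumption; point at your own broker.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Let the broker deliver at most one unacked message at a time to this consumer,
# so it can't hoard messages it isn't ready to process.
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    print("processing", body)  # stand-in for the real job handler
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only once the work is done

channel.basic_consume(queue="jobs", on_message_callback=handle)
channel.start_consuming()
```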
>To make all of this run smoothly, we enqueue and dequeue thousands of jobs every day.
If your needs aren't that expensive, and you don't anticipate growing a ton, then it's probably a smart technical decision to minimize your operational stack. Assuming 10k jobs a day, that's roughly 7 jobs per minute. Even the most unoptimized database should be able to handle this.
Years of being bullshitted have taught me to instantly distrust anyone telling me about how many things they do per day. Jobs or customers per day is something to tell your banker, or investors. For tech people it's per second, per minute, maybe per hour, or it's self-aggrandizement.
A million requests a day sounds really impressive, but it's about 12 req/s, which is not a lot. I had a project that needed 100 req/s ages ago. That was considered a reasonably complex problem but not world class, and only because C10k was an open problem. Now you could do that with a single 8xlarge. You don't even need a cluster.
10k tasks a day is 7 per minute. You could do that with Jenkins.
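As a quick sanity check on those per-day conversions (plain arithmetic, nothing project-specific):

```python
# Back-of-the-envelope: convert per-day rates into per-second / per-minute.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400
MINUTES_PER_DAY = 24 * 60        # 1,440

print(1_000_000 / SECONDS_PER_DAY)  # ~11.6 requests per second for "a million a day"
print(10_000 / MINUTES_PER_DAY)     # ~6.9 jobs per minute for "10k a day"
```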
The best thing about using PostgreSQL for a queue is that you can benefit from transactions: a job is only queued if the related data is 100% guaranteed to have been written to the database, and the related data can't be committed without the queue entry also being written.
Brandur wrote a great piece about a related pattern here: https://brandur.org/job-drain
He recommends using a transactional "staging" queue in your database, which is then written out to your actual queue by a separate process.
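A minimal sketch of that staged-queue idea in Python with psycopg2. The table and column names (staged_jobs, payload), the users table, and the push_to_real_queue callback are illustrative assumptions, not Brandur's actual schema:

```python
import json
import psycopg2

conn = psycopg2.connect("dbname=app")  # connection string is an assumption

def create_user_and_stage_job(email):
    # One transaction: the user row and its job either both commit or neither does.
    with conn, conn.cursor() as cur:
        cur.execute("INSERT INTO users (email) VALUES (%s) RETURNING id", (email,))
        user_id = cur.fetchone()[0]
        cur.execute(
            "INSERT INTO staged_jobs (job_name, payload) VALUES (%s, %s)",
            ("send_welcome_email", json.dumps({"user_id": user_id})),
        )

def drain_staged_jobs(push_to_real_queue):
    # Separate "drainer" process: move committed rows out to the real queue.
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, job_name, payload FROM staged_jobs "
            "ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 100"
        )
        for job_id, job_name, payload in cur.fetchall():
            push_to_real_queue(job_name, payload)  # e.g. publish to RabbitMQ/SQS
            cur.execute("DELETE FROM staged_jobs WHERE id = %s", (job_id,))
```

Because the drainer can crash after publishing but before committing the delete, this gives at-least-once delivery, so downstream consumers should still be idempotent.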
Mountpoint for Amazon S3 is optimized for read-heavy workloads that need high throughput. It intentionally does not implement the full POSIX specification for file systems.
This patch adds AVX512-based 64-bit sorting on AVX512-SKX and 16-bit sorting on AVX512-ICL. All the AVX512 sorting code has been reformatted into separate header files and put in a separate folder. The AVX512 64-bit sorting is nearly 10x faster and AVX512 16-bit sorting is nearly 16x faster when compared to std::sort. Still working on running NumPy benchmarks to get exact benchmark numbers.
16-bit int sped up by 17x and float64 by nearly 10x for random arrays. Benchmarked on an 11th Gen Tiger Lake i7-1165G7.
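A rough way to run that kind of comparison yourself; whether np.sort actually hits the AVX-512 path depends on your NumPy build and CPU, and the array size and seed here are arbitrary assumptions rather than the patch author's benchmark setup:

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)

for dtype in (np.int16, np.float64):
    # Random data cast to the dtype under test.
    data = rng.integers(0, 10_000, size=1_000_000).astype(dtype)
    t = timeit.timeit(lambda: np.sort(data, kind="quicksort"), number=20)
    print(dtype.__name__, f"{t / 20 * 1e3:.2f} ms per sort")
```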
The somewhat "interesting" situation is that AVX-512 has been removed from Intel's new consumer CPUs and is only retained on server and workstation CPUs, while AMD's Zen 4 has gone ahead and added AVX-512 support...
That was problematic in cases where :has() contained another jQuery selector extension (e.g. :has(:contains("Item"))) or contained itself (:has(div:has(a))).
The calloc() function shall allocate unused space for an array of nelem elements each of whose size in bytes is elsize. The space shall be initialized to all bits 0.
Instead, it fakes it, using virtual memory: it takes a single 4 KiB page of memory that is already full of zeros (which it keeps around for just this purpose), and maps 1 GiB / 4 KiB = 262144 copy-on-write copies of it into our process's address space. The first time we actually write to each of those 262144 pages, the kernel has to go and find a real page of RAM, write zeros to it, and then quickly swap it in place of the "virtual" page that was there before. But this happens lazily, on a page-by-page basis.
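You can see that laziness from Python: np.zeros on glibc/Linux typically goes through calloc and gets those copy-on-write zero pages, so the allocation returns almost instantly and the first write to each page is what actually costs time. A rough sketch, with the caveat that the exact behaviour is allocator- and OS-dependent:

```python
import time
import numpy as np

t0 = time.perf_counter()
a = np.zeros(1 << 30, dtype=np.uint8)   # "allocate" 1 GiB of zeros
t1 = time.perf_counter()
a[::4096] = 1                           # touch one byte in every 4 KiB page
t2 = time.perf_counter()

print(f"allocate:    {t1 - t0:.4f}s")   # usually near-instant: no pages faulted in yet
print(f"first touch: {t2 - t1:.4f}s")   # now the kernel must back each page with real RAM
```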