Mac M1 vs M2 vs M3 vs M4 for Running LLMs - Real Tests - ML Journey
detailed benchmarks and info wrt apple silicon cpus with llama.
This leaderboard compares 100+ text and image embedding models across 1000+ languages. We refer to the publication of each selectable benchmark for details on metrics, languages, tasks, and task types. Anyone is welcome to add a model, add benchmarks, help us improve zero-shot annotations or propose other changes to the leaderboard.
great gpu benchmarking suite and list of benchmarks on lots of gpus. predates the RTX 50 series and hasn't been updated in 2 years. covers apple silicon too.
The Ultimate Express. Fastest HTTP server with full Express compatibility, based on µWebSockets.
This library is a fast re-implementation of Express.js 4, designed as a drop-in replacement with the same API and functionality. It is not a fork of Express.js. To verify that µExpress matches Express's behavior in all cases, every test is run against Express first and then against µExpress, and the results are compared.
npm install ultimate-express -> swap the express import for ultimate-express -> done
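For concreteness, a minimal sketch of what the swap looks like in a Node app; the package name is real, but the route and port are illustrative assumptions:

    // before: const express = require("express");
    const express = require("ultimate-express"); // drop-in replacement, same Express 4 API

    const app = express();

    // existing Express 4 routes are intended to keep working unchanged
    app.get("/", (req, res) => {
      res.send("hello from ultimate-express");
    });

    app.listen(3000, () => console.log("listening on :3000"));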
µWebSockets.js is a standards-compliant web server written in 10,000 lines of C++. It is exposed to Node.js as a simple-to-use native V8 addon and performs at least 10x better than Socket.IO and 8.5x better than Fastify. It makes up the core components of Bun and is the fastest standards-compliant web server in the TechEmpower (not endorsed) benchmarks.
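For comparison, a minimal hello-world sketch against the raw µWebSockets.js App API, with no Express layer; the route and port are illustrative, and note that the package is typically installed from the uNetworking GitHub releases rather than the npm registry:

    const uWS = require("uWebSockets.js");

    uWS.App()
      // note: uWS handlers receive (res, req), the reverse of Express's (req, res) order
      .get("/*", (res, req) => {
        res.end("hello from uWebSockets.js");
      })
      .listen(3000, (listenSocket) => {
        if (listenSocket) {
          console.log("listening on :3000");
        } else {
          console.log("failed to listen on :3000");
        }
      });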
MLPerf Client is a benchmark developed collaboratively at MLCommons to evaluate the performance of large language models (LLMs) and other AI workloads on personal computers, from laptops and desktops to workstations. By simulating real-world AI tasks, it provides clear metrics for understanding how well systems handle generative AI workloads. The MLPerf Client working group intends for this benchmark to drive innovation and foster competition, ensuring that PCs can meet the challenges of the AI-powered future.