How fast is a macOS VM, and how small could it be?
113 points by moosia 5 hours ago | 37 comments

fouc 4 hours ago
>Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally.

Good reminder that there's a certain amount of memory tied up with each core (probably mainly page cache and concurrency handling etc).

reply
fulafel 58 minutes ago
I'd bet on the null hypothesis: the memory-usage changes would hold if the core count were kept constant and only the VM's memory size were adjusted.
reply
brookst 54 minutes ago
Agreed. This is the OS adapting to available memory.

Similarly if you started with 4GB and there was 900MB available for user apps, I expect you could launch apps that consume 1500MB just fine; the OS is leaving enough to launch anything, and making use of unused memory for cache/etc.

reply
adrian_b 37 minutes ago
As a general rule, the amount of physical memory installed in a computer should also be proportional to the number of hardware threads provided by its CPU.

Besides the fact that the operating system may allocate some memory for each thread, when you launch a multi-threaded application that can use all available threads, for instance the compilation of a big software project, it will frequently allocate working memory in an amount proportional to the number of working threads.

I have encountered many multi-threaded applications that need up to 2 GB per thread to work well.

This corresponds to having 64 GB for a desktop CPU with 32 threads, like the Ryzen 9 9950X.

For the compilation example, I have seen software projects, like Chrome/Chromium and its derivatives, where if you do not have enough memory in proportion to the number of hardware threads (e.g. only 32 GB for a 16-core/32-thread CPU), you must reduce the number of concurrent compilations with an appropriate parameter to "make -j", leaving some threads and cores idle, because otherwise you may encounter out-of-memory errors.
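That rule of thumb is easy to script. A minimal sketch, assuming the ~2 GB-per-job figure from above (the function name and the per-job constant are illustrative, not from any real build system), which caps make's job count by RAM as well as by thread count:

```shell
# Cap "make -j" by available RAM, not just by hardware threads.
# PER_JOB_GB is an assumption (~2 GB of working memory per compile job);
# tune it for your project.
PER_JOB_GB=2

jobs_for() {  # usage: jobs_for <total_ram_gb> <hw_threads>
  ram_gb=$1
  threads=$2
  by_ram=$(( ram_gb / PER_JOB_GB ))
  if [ "$by_ram" -lt 1 ]; then by_ram=1; fi
  if [ "$by_ram" -lt "$threads" ]; then
    echo "$by_ram"
  else
    echo "$threads"
  fi
}

# The example from the comment: 32 GB of RAM, 16 cores / 32 threads.
jobs_for 32 32   # prints 16, i.e. "make -j16" rather than "-j32"
```

On a real machine you would feed it the actual totals (e.g. from `sysctl hw.memsize` and `sysctl hw.ncpu` on macOS) instead of hard-coded numbers.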

reply
wutwutwat 48 minutes ago
There is some overhead per-core, you're right, but imo this reduction in usage is likely from how the kernel allocates available memory, which is being reduced as well. The kernel will keep read caches around longer with more memory, it'll prefer to compress memory instead of swap to disk if it has more, it'll purge/cleanup reclaimable memory less often with more memory, etc. It even scales its internal buffer sizes and vnode tables depending on total memory.

All good things imo: it dynamically makes the most of what is available, at the expense of making it harder to see a true baseline minimum requirement to operate.

Fun things to check: `vm_stat`

    $ vm_stat
    Mach Virtual Memory Statistics: (page size of 4096 bytes)
    Pages free:                    230295.
    Pages active:                 1206857.
    Pages inactive:               1206361.
    Pages speculative:              31863.
    Pages throttled:                    0.
    Pages wired down:              470093.
    Pages purgeable:                18894.
    "Translation faults":        21635255.
    Pages copy-on-write:          1590349.
    Pages zero filled:           11093310.
    Pages reactivated:              15580.
    Pages purged:                   50928.
    File-backed pages:             689378.
    Anonymous pages:              1755703.
    Pages stored in compressor:         0.
    Pages occupied by compressor:       0.
    Decompressions:                     0.
    Compressions:                       0.
    Pageins:                       832529.
    Pageouts:                         225.
    Swapins:                            0.
    Swapouts:                           0.
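Those page counts are easier to read in bytes. A small awk sketch (the function name is made up) that multiplies the interesting rows by the page size from the header line and prints GB:

```shell
# Convert vm_stat's page counts into GB, taking the page size from
# the "Mach Virtual Memory Statistics" header line.
to_gb() {
  awk '
    /page size of/      { ps = $8 }          # e.g. 4096 bytes
    /^Pages free/       { free  = $3 + 0 }   # "+ 0" drops the trailing dot
    /^Pages active/     { act   = $3 + 0 }
    /^Pages wired down/ { wired = $4 + 0 }
    END { printf "free %.2f GB, active %.2f GB, wired %.2f GB\n",
          free * ps / 2^30, act * ps / 2^30, wired * ps / 2^30 }'
}

# On a Mac:  vm_stat | to_gb
# Here, fed the numbers pasted above:
printf '%s\n' \
  'Mach Virtual Memory Statistics: (page size of 4096 bytes)' \
  'Pages free: 230295.' \
  'Pages active: 1206857.' \
  'Pages wired down: 470093.' | to_gb
# prints: free 0.88 GB, active 4.60 GB, wired 1.79 GB
```

Note that Apple Silicon Macs report a 16384-byte page size, which is why reading it from the header beats hard-coding 4096.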

edit: no code fence markdown support or am I doing something wrong?

reply
Havoc 3 hours ago
Got an M5 Air recently - my first dive into macOS land, so I'm trying to figure this out too.

Seems essentially impossible to get:

* pytorch

* GPU acceleration

* VM/container like isolation

The virtio-gpu layer gets closest, but it seems to only pass through the graphics side of the GPU, not compute, so no PyTorch.

reply
emmelaich 59 minutes ago
I got torch to run in a Cirruslabs Tart instance.
reply
plufz 2 hours ago
I need this too, and looked into it quite a lot a year ago. I haven't had time to check out the recent developments with Docker Model Runner (vllm-metal) or podman libkrun. Did neither of those work for you?
reply
Havoc 60 minutes ago
vllm-metal isn't GPU access but rather an OpenAI-compatible endpoint, which I can already get via an LM Studio endpoint over the network.

>podman libkrun

Haven't tried it, but research suggests it's still really shaky. podman libkrun exposes Vulkan, while torch expects MPS on Macs. Sounds like one can force Vulkan, but that's apparently slow and beta-ish?

reply
nasretdinov 4 hours ago
Honestly, macOS can probably go much lower than that if you turn off some stuff that's not strictly necessary for a VM. The first iPhones only had 128 MiB of RAM, and they ran a trimmed-down version of macOS Tiger, I believe. It's just that RAM has been quite abundant so far, so there was no real reason to trim it down. But it's definitely possible, and probably not that hard either; we just need to start trying again :)
reply
dhruv3006 3 hours ago
reply
nottorp 5 hours ago
> Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used

But... if you start applications inside your VM, won't it want the full 8 GB you've allocated, not the 5 GB it uses at startup?

reply
stingraycharles 4 hours ago
I don’t assume that macOS virtualization is advanced enough to support memory ballooning, or is that not what you’re referring to?

Edit: I stand corrected!

reply
pyth0 4 hours ago
I don't assume anything either, but a single Google search is enough to dispel that [1].

[1] https://developer.apple.com/documentation/virtualization/vzv...

reply
sgt 4 hours ago
macOS is generally pretty amazing at efficient memory usage and VM (virtual memory subsystem) handling. So even an 8 GB machine can run pretty impressive workloads without the user thinking the machine is underpowered.
reply
p_ing 3 hours ago
Not really. Larger page sizes mean more potential for wasted memory, and it has had a long-standing memory leak in some core component, such that even Calculator can cause an OOM event.
reply
jdiff 2 hours ago
GP is pretty accurate in my experience. Up until last year I was still running an Intel MacBook Pro with 8GB of RAM and successfully multitasked with Blender, Illustrator, Unity, VS Code, and Firefox quite often. The math doesn't make sense, but all stayed responsive even with frequent hops between them. The only OOM events I ran into were memory leaks from Firefox, I believe from an extension.
reply
p_ing 60 minutes ago
There's nothing particularly interesting about that. Linux distro-of-your-choice can run the equivalents fine, as can Windows.

Browse /r/macos if you dare to wade into the uninformed cesspool; it's full of OOTB apps causing OOMs (among 3rd party apps) with the past at least two major versions of macOS.

reply
jdiff 10 minutes ago
I think there is something interesting there. I was running lighter workloads on similar RAM when I daily drove Debian and was frequently brought to my knees by swapping to death. I had to make conscious choices and manage my RAM usage to avoid it, and still occasionally got T-boned by something I overlooked. I have never had to worry about that with macOS.

I admit I don't have much experience with how Windows handles constrained memory since XP, and XP was abysmal at it just by virtue of being far more bloated than an equivalent Linux distro. It's certainly far more bloated nowadays, but maybe it handles memory pressure better.

None of this should be construed to say that macOS doesn't have serious issues or that it's not in dire need of a Snow Leopard-esque "0 new features" release. That's tangential to its memory handling, where I haven't seen the issues you describe.

reply
p_ing 5 minutes ago
Even NT4 handles memory pressure better than modern-day Linux. It's just not a fair comparison; Linux has never dealt with userspace OOM well.

As for macOS...

https://old.reddit.com/r/MacOS/comments/1njf1aj/bravo_apple_...

https://old.reddit.com/r/MacOS/comments/1nxh08n/impressive_m...

https://old.reddit.com/r/MacOS/comments/1jo5pnq/passwords_ap...

https://old.reddit.com/r/MacOS/comments/1gkwxe4/how_is_memor...

https://old.reddit.com/r/MacOS/comments/1seq0ij/freeform_has...

There are _plenty_ more. There is some fundamental library leaking given the range of impacted apps.

reply
nottorp 4 hours ago
What will that help with if the host and guest combined need > physical ram?
reply
jdub 3 hours ago
If guest memory can be reclaimed, it doesn't need to be paged to disk once you hit RAM contention. It's mostly saving accounting overhead, but it'll have some effect on latency, which you're more likely to perceive under contention.
reply
mgaunard 3 hours ago
My only experience with VMs on macOS is colima+docker, and it's relatively painful and inefficient (but usable).
reply
woadwarrior01 42 minutes ago
Try Apple's container CLI. I moved a project of mine from colima+docker to it relatively easily, a couple of weekends ago.

https://github.com/apple/container

reply
embedding-shape 3 hours ago
Recently got a Mac Mini for local CI purposes (together with Forgejo Actions), took a broad look at the ecosystem, and decided to just roll with "build on host" instead. Setting up signing/notarization while isolating it from the host just looked like an insurmountable task, even with agents. At least the macOS builds are really fast now, and the signing/notarization is just ~200 lines of Bash...
reply
latexr 3 hours ago
> the signing/notarization just ~200 lines of Bash

200 lines?! That’s two orders of magnitude too many. What exactly are you doing that you need so much code for signing and notarisation?

reply
embedding-shape 16 minutes ago
Off the top of my head: unlocking the keychain, finding the right identity, notarizing two parts (the binary itself and the .dmg the .app ships in), and some other stuff, I'm sure. I can take a deeper look in a bit. Most of the hassle is because it's 100% unattended, so I had to do stuff to avoid GUI prompts for passwords/unlocks, and the Forgejo Runner has a different security context.
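The steps listed can be sketched roughly like this. All names (app, identity, keychain, profile) are placeholder assumptions, not the parent's actual setup, and DRY_RUN defaults to on so the macOS-only tools are only printed, not executed:

```shell
# Hedged sketch of an unattended sign-and-notarize step, following the
# steps the parent describes. Everything here is a placeholder example.
# DRY_RUN=1 (the default) echoes each command instead of running it;
# set DRY_RUN=0 on an actual Mac with credentials configured.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

APP="MyApp.app"                                           # placeholder
DMG="MyApp.dmg"                                           # placeholder
IDENTITY="Developer ID Application: Example (TEAMID123)"  # placeholder

# Unlock the CI keychain so codesign doesn't pop a GUI password prompt.
run security unlock-keychain -p "${KEYCHAIN_PW:-}" build.keychain
# Sign the app bundle with the hardened runtime enabled.
run codesign --force --options runtime --sign "$IDENTITY" "$APP"
# Submit the disk image for notarization and wait for the verdict.
run xcrun notarytool submit "$DMG" --keychain-profile ci --wait
# Staple the notarization ticket so offline Gatekeeper checks pass.
run xcrun stapler staple "$APP"
```

On a real machine the `ci` profile would come from `xcrun notarytool store-credentials`, which keeps the App Store Connect credentials out of the script itself.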
reply
yohannparis 3 hours ago
Could you share your recipe please? I’m interested.
reply
collabs 2 hours ago
I was hoping to see the bare macOS with all the applications removed as much as possible, no graphical user interface, just the bare minimum to boot, login as a user, and write hello world dot txt with a text editor. Or maybe some command line apps? Or is it no longer macOS at that point?
reply
jitl 21 minutes ago
You can boot regular macOS directly to a root terminal in “Single User Mode”. This was easier on Intel Macs of yore but is also possible on M1+.

Below content from https://eclecticlight.co/2020/11/28/startup-modes-for-m1-mac...

Launch 1 True Recovery, open Terminal, then downgrade system security to allow more boot arguments (you might need to restart after this step):

    bputil -a

Then set the boot argument and restart to launch Single User Mode:

    nvram boot-args="-s"

Once in Single User Mode, run these commands in order to mount the root volume group:

    mount -P 1
    /usr/libexec/init_data_protection
    mount -P 2

Future restarts will always launch Single User Mode first. To stop launching Single User Mode:

    nvram boot-args=""

To restore your system to full security, run "bputil -f" (prefix it with "sudo" if you run it from within macOS).

reply
hmry 2 hours ago
"I'd just like to interject for a moment. What you're referring to as macOS, is in fact, macOS/Darwin, or as I've recently taken to calling it, macOS plus Darwin."

"What you're referring to as Darwin, is in fact, Darwin/XNU."

"What you're referring to as XNU, is in fact, BSD/Mach."

I seem to remember it being possible to run macOS-less Darwin several years ago, not sure if that's still possible or if Apple has modified it so much at this point that it's useless without at least some macOS components.

reply
Terretta 45 minutes ago
> several years ago

2024, maybe? Needs some renewed interest, perhaps:

https://www.puredarwin.org/

reply
chuckadams 30 minutes ago
Needs someone to pick it up: its project leader passed away last year.
reply
colechristensen 29 minutes ago
https://github.com/apple/darwin-xnu

Apple stopped updating this 5 years ago.

I remember getting it to boot once long ago but I didn't have anything to actually do with it.

reply
doubled112 11 minutes ago
Looks like it is still getting updates and has moved here: https://github.com/apple-oss-distributions/xnu
reply
dieulot 5 hours ago
I'm wondering if the Xcode simulator (without Xcode running) performs as well; my 2020 Intel MacBook Air has been incapable of running Safari in iOS smoothly for nearly all its life.
reply
jitl 6 minutes ago
MacBook Neo should run rings around any Intel Air: Geekbench shows it at 250% of the score of the 2020 Intel Air.

https://browser.geekbench.com/v6/cpu/compare/17022784?baseli...

reply
aykutseker 21 minutes ago
[dead]
reply
shawryadev 2 hours ago
[dead]
reply
vk6flab 3 hours ago
[dead]
reply