Seems essentially impossible to get:
* pytorch
* GPU acceleration
* VM/container-like isolation
The virtio-gpu layer gets closest, but it seems to pass through only the graphics side of the GPU, not compute, so no PyTorch.
>podman libkrun
Haven't tried it, but research suggests it's still really shaky. podman with libkrun exposes Vulkan, while PyTorch expects MPS on Macs. Sounds like one can force Vulkan, but that's apparently slow and beta-ish?
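For what it's worth, a quick way to sanity-check which backend torch actually sees (assuming torch is installed on the host and in the guest):

    # On bare-metal macOS this should print True (Metal/MPS backend available)
    python3 -c "import torch; print(torch.backends.mps.is_available())"

    # Inside a libkrun guest it prints False; any GPU you do get is exposed via
    # Vulkan (check with `vulkaninfo --summary` if vulkan-tools is installed),
    # which stock PyTorch does not use as a compute backend.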
But... if you start applications inside your VM, it will want the full 8 GB you've allocated, not the 5 GB it uses at startup?
Edit: I stand corrected!
[1] https://developer.apple.com/documentation/virtualization/vzv...
Browse /r/macos if you dare to wade into the uninformed cesspool; it's full of reports of out-of-the-box apps (along with 3rd-party apps) causing OOMs over at least the past two major versions of macOS.
I admit I don't have much experience with how Windows handles constrained memory since XP, and XP was abysmal at it just by virtue of being far more bloated than an equivalent Linux distro. It's certainly far more bloated nowadays, but maybe it handles memory pressure better.
None of this should be construed to say that macOS doesn't have serious issues or that it's not in dire need of a Snow Leopard-esque "0 new features" release. That's tangential to its memory handling, where I haven't seen the issues you describe.
As for macOS...
https://old.reddit.com/r/MacOS/comments/1njf1aj/bravo_apple_...
https://old.reddit.com/r/MacOS/comments/1nxh08n/impressive_m...
https://old.reddit.com/r/MacOS/comments/1jo5pnq/passwords_ap...
https://old.reddit.com/r/MacOS/comments/1gkwxe4/how_is_memor...
https://old.reddit.com/r/MacOS/comments/1seq0ij/freeform_has...
There are _plenty_ more. Some fundamental library must be leaking, given the range of impacted apps.
200 lines?! That's two orders of magnitude too many. What exactly are you doing that you need so much code for signing and notarisation?
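For comparison, the core of it is only a handful of commands; something like the following (app name, signing identity, and keychain profile are placeholders):

    # Sign with hardened runtime (required for notarization)
    codesign --force --options runtime --timestamp \
      --sign "Developer ID Application: Example Corp (TEAMID1234)" MyApp.app

    # Zip, submit to the notary service, and wait for the verdict
    ditto -c -k --keepParent MyApp.app MyApp.zip
    xcrun notarytool submit MyApp.zip --keychain-profile "notary-profile" --wait

    # Staple the ticket so Gatekeeper can verify it offline
    xcrun stapler staple MyApp.app

Presumably the rest of such a script is entitlements, error handling, and CI plumbing rather than the signing itself.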
The content below is from https://eclecticlight.co/2020/11/28/startup-modes-for-m1-mac...
Launch 1 True Recovery (1TR), open Terminal, then run "bputil -a" (without the quotes) to downgrade system security and allow more boot arguments. You might need to restart after this step.
Then, run [nvram boot-args="-s"] (without the square brackets). Restart to launch Single User Mode.
Once in Single User Mode, run these commands (in the following order) to mount the root volume group:
1. mount -P 1
2. /usr/libexec/init_data_protection
3. mount -P 2
Future restarts will always launch Single User Mode first. To stop launching Single User Mode, run [nvram boot-args=""] (without the square brackets).
To restore your system to full security, run "bputil -f" (without the quotes). If you choose to run that command in macOS, prefix it with "sudo".
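Condensed into one sequence (straight quotes; exactly the commands quoted above):

    # In the 1TR Recovery Terminal: downgrade security to allow extra boot-args
    bputil -a

    # Enable single user mode and restart into it
    nvram boot-args="-s"

    # In single user mode: mount the root volume group, in this order
    mount -P 1
    /usr/libexec/init_data_protection
    mount -P 2

    # Later: stop booting into single user mode, then restore full security
    nvram boot-args=""
    bputil -f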
"What you're referring to as Darwin, is in fact, Darwin/XNU."
"What you're referring to as XNU, is in fact, BSD/Mach."
I seem to remember it being possible to run macOS-less Darwin several years ago, not sure if that's still possible or if Apple has modified it so much at this point that it's useless without at least some macOS components.
2024, maybe? Needs some renewed interest, perhaps:
Apple stopped updating this 5 years ago.
I remember getting it to boot once long ago but I didn't have anything to actually do with it.
https://browser.geekbench.com/v6/cpu/compare/17022784?baseli...
Good reminder that there's a certain amount of memory tied up with each core (probably mainly page cache and concurrency handling etc).
Similarly if you started with 4GB and there was 900MB available for user apps, I expect you could launch apps that consume 1500MB just fine; the OS is leaving enough to launch anything, and making use of unused memory for cache/etc.
Besides the memory that the operating system may allocate for each thread, a multi-threaded application that can use all available threads (for instance, the compilation of a big software project) will frequently allocate working memory in an amount proportional to the number of worker threads.
I have encountered many multi-threaded applications that need up to 2 GB per thread to work well.
This corresponds to 64 GB for a desktop CPU with 32 threads, like the Ryzen 9 9950X.
For the compilation example, I have seen software projects, like Chrome/Chromium and its derivatives, where if you do not have enough memory in proportion to the number of hardware threads (e.g. only 32 GB for a 16-core/32-thread CPU), you must reduce the number of concurrent compilation jobs with an appropriate "make -j" parameter, leaving some threads and cores idle, because otherwise you may encounter out-of-memory errors.
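A rough way to cap the job count by memory rather than by thread count (the 2 GiB-per-job figure is an assumption that varies a lot by project):

    # Pick make -j so that jobs * ~2 GiB fits in RAM, but never more than the thread count
    THREADS=$(getconf _NPROCESSORS_ONLN)
    MEM_GIB=$(( $(sysctl -n hw.memsize) / 1073741824 ))   # macOS; read /proc/meminfo on Linux
    JOBS=$(( MEM_GIB / 2 < THREADS ? MEM_GIB / 2 : THREADS ))
    make -j "$JOBS"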
All good things, IMO: it dynamically makes the most of what is available, at the expense of making it harder to see a true baseline of the minimum memory required to operate.
Fun things to check: `vm_stat`
    $ vm_stat
    Mach Virtual Memory Statistics: (page size of 4096 bytes)
    Pages free:                              230295.
    Pages active:                           1206857.
    Pages inactive:                         1206361.
    Pages speculative:                        31863.
    Pages throttled:                              0.
    Pages wired down:                        470093.
    Pages purgeable:                          18894.
    "Translation faults":                  21635255.
    Pages copy-on-write:                    1590349.
    Pages zero filled:                     11093310.
    Pages reactivated:                        15580.
    Pages purged:                             50928.
    File-backed pages:                       689378.
    Anonymous pages:                        1755703.
    Pages stored in compressor:                   0.
    Pages occupied by compressor:                 0.
    Decompressions:                               0.
    Compressions:                                 0.
    Pageins:                                 832529.
    Pageouts:                                   225.
    Swapins:                                      0.
    Swapouts:                                     0.
edit: no code fence markdown support or am I doing something wrong?
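To turn those page counts into something readable, multiply by the page size from the header; a small awk sketch, assuming the field positions match the output above:

    # Convert a few vm_stat counters from pages to MiB
    vm_stat | awk '
      /page size of/  { ps = $8 }
      /Pages free/    { printf "free:   %.0f MiB\n", $3 * ps / 1048576 }
      /Pages active/  { printf "active: %.0f MiB\n", $3 * ps / 1048576 }
      /Pages wired/   { printf "wired:  %.0f MiB\n", $4 * ps / 1048576 }
    '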