Does FreeBSD work better?
Admittedly, the 10G interfaces and fast RAM make up for some of it, but at least for a normal homelab setup, I can't think of an application needing RAM faster than even DDR3, especially at this power level.
A base Mac Mini (256GB/16GB) would cost me €720 while a Minisforum MS-R1 (1TB/32GB) would cost me €559 (minus a 25 euro discount for signing up to their newsletter if you accept that practice).
On price-to-performance the Apple solution may be better, but the absolute prices aren't similar at all.
Upgrading the Mac to also feature 1TB of storage and 32GB of RAM, the price rises by a whopping €1000 to €1719.
I did not realize the EU versions were that much more expensive.
I do agree about the RAM/storage prices though. It's only worth it if you want the raw power, where the Mac handily beats this.
MinisForum makes disposable hardware. We used to use them for TV computers at work, and while they are cheap, they are fidgety with hardware and drivers, come with a hacked Windows Enterprise install by default, and generally last for about 2 years before they hit the recycle pile.
Go for the Mac Mini; the hardware, including the thermal design, is built exceptionally well. That's why you still see 20-year-old Mac Minis running as home servers etc.
Without the ability to upgrade either storage or RAM, a 256GB SSD with 16GB RAM is quite useless for a home server. Minisforum doesn't offer any options with that little RAM and storage it seems (you can pick between barebones and 1TB models).
The bare minimum spec for the Mac Mini sits at an interesting price point, but if you use it for any more than the bare minimum it'll be pretty restrictive with how memory-hungry macOS has become. No Linux support to speak of also makes for a rather mediocre home server experience.
One interesting part I found out of Apple's European pricing is that after currency conversion and subtracting VAT, the European price is still equivalent to $700, which is $100 more than they charge within the US. Looks like a 1/6th price increase is all you need for consumer rights!
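A quick back-of-the-envelope check of that claim. The VAT rate and exchange rate here are my assumptions (they vary by country and date), not figures from the comment:

```python
# Rough sanity check of the EU-vs-US pricing claim above.
# vat_rate and usd_per_eur are assumed values, not from the original post.
eur_incl_vat = 720.0      # quoted EU price of the base Mac Mini
vat_rate = 0.21           # assumed EU VAT rate
usd_per_eur = 1.17        # assumed EUR -> USD exchange rate

eur_ex_vat = eur_incl_vat / (1 + vat_rate)
usd_ex_vat = eur_ex_vat * usd_per_eur
print(f"~${usd_ex_vat:.0f} ex-VAT, vs the lower US list price")
```

With those assumed rates the ex-VAT EU price lands around the $700 mark the comment describes.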
Indeed macOS is a bit memory hungry but... unified memory, the sheer speed those chips can move data around is ridiculous. And macOS is a proper workstation Unix.
You're right - it's not ideal for headless. But there are ways. Still less painful than running Windows as a server.
my Atom, 4 GB, 1 TB HDD bare-metal OVH server also disagrees.
I often see statements like this made as if it's an exceptional characteristic of Macs. I've found that almost all computer hardware I buy has made it 20 years, though. Sure, a hard drive or something dies every once in a while, but most stuff gets retired because I just don't care to use it anymore, not because it doesn't work.
When someone says he drank a few coffees, I would never have guessed it was 32.
Also, I'm surprised how often on here I see people argue about price differences that are literally as much as I spend on entire computers.
So over the span of 20 years they’ll pay a multiple of what the Mac mini costs once for these crappy computers. May as well get a specced-out Mac.
It runs Docker (supports docker compose) and VMs, and has the usual RAID stuff.
They also do an arm version for half the price but I wanted the intel gpu for transcoding.
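For illustration of the docker compose support mentioned, a service definition along these lines (the image, paths, and port are my own example, not from the post) is also how you'd put that Intel iGPU to work for transcoding:

```yaml
# Hypothetical compose sketch for a media server on such a NAS.
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri   # pass the Intel iGPU through for hardware transcoding
    volumes:
      - ./config:/config
      - ./media:/media
    ports:
      - "8096:8096"
```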
The 10 GBit/s NUCs you find on eBay are enterprise-grade stuff: 10 Gbit/s hasn't really been a consumer thing. A used Fujitsu, Intel or Mellanox dual 10 Gbit/s bought on eBay isn't a "stained shit" that's "not guaranteed to be reliable". It's enterprise grade hardware.
(that said the machine in TFA looks nice)
They'll also probably work out of the box on whatever "server" distro you throw at them, which seems to be an issue with the machine in question.
However, GP was most likely talking about actual NUCs, not NICs. I mean, there wasn't a typo. The point probably being that the CPUs in some of those mini boxen are likely to be woefully undercooled, so performance may not be what you expect by just looking at the CPU model.
An older but better ARM CPU with four Cortex-A78 cores (Armv8.2-A ISA) is available for use in embedded computers from Qualcomm, rebranded from Snapdragon to Dragonwing. There are a few single-board computers of credit-card size with it, which are much faster than Raspberry Pi and the like.
Such SBCs are cheaper than the one from TFA and they are better for the purpose of software development.
The computer described in this article has the advantage of better I/O interfaces; the SoC has many more PCIe lanes, which allows the computer to have more and faster network interfaces.
If you want an ARM computer to be a true high-throughput network server, then this one is the best choice. Nevertheless, for a true network server, a mini-PC with an Intel or AMD CPU will have much, much better performance, at the same price or even at a lower price.
Using ARM is justifiable only for the purpose of software development, or if you want a smaller volume and a lower power consumption than achievable by a NUC-sized computer. For these purposes, one of the SBCs with Qualcomm QCM6490 is a better choice.
While a credit-card-sized SBC has only one Ethernet port, you can connect as many Ethernet interfaces as you desire to it (by using a USB hub and USB Ethernet interfaces), as long as the network throughput is not important and you just want to test some server software.
The Minisforum computer from the parent article has only 2 advantages for software development, the Armv9 ISA and being available with more memory, i.e. 32 GB or 64 GB, while the smaller ARM SBCs are available with 8, 12 or 16 GB.
The article never explained why the author wanted an ARM setup. I can only consider this a spiritual thing, just like how the author avoids Debian without providing any concrete explanations.
So in this case, the only valid reason to choose it is to have the ARM ISA for the purpose of software development.
This Chinese CPU is the only Armv9 CPU that is available in anything else than smartphones or expensive computers from Apple, Qualcomm or NVIDIA (or in even more expensive big servers). So there may be cases when it is desirable for software development, even if it has some quirks.
Mobile x86 processors used in mini PCs these days (as in 2026) are very competitive in terms of power efficiency. I wouldn't go for ARM just for that factor alone, especially without side-by-side comparisons of benchmarks.
That's rubbish; even the people who don't care about ISA will care about stuff like power draw and software availability (although ironically arm seems distinctly worse in terms of power draw here).
But, I hope there are other people like me who will take a premium to avoid reading x86 core dumps, which is sort of like getting nails driven through your eyes. Yes, there's more software optimized for the chips; it is still bad code.
"Most people" aren't on HN, either.
The number of ARM servers at cloud providers is growing, but the ARM server options are severely lacking for most.
I, personally, would like to see more ARM growth (and I think we're heading that direction anyway... look at NVIDIA right now). Buying ARM servers that help push ARM software development forward is probably a good thing, IMO, from that POV.
Since this server seems to have pretty average performance/watt and cooling, I can't really see much advantage to ARM here, at least for typical server use cases.
Unless you're doing ARM development, but I feel like a Pi 4/5 is better for basic development.
This computer uses 8 Cortex-A720 cores (and 4 little cores with negligible performance), which have a performance similar to the older Intel E-cores, i.e. Gracemont or Crestmont from Alder Lake, Raptor Lake or Meteor Lake. They are much slower than the recent Intel E-cores, i.e. Skymont or Darkmont, from Arrow Lake or Panther Lake.
So the performance of the whole CPU is similar to the 8-core Intel N300 (Alder Lake N) or Intel N350 (Twin Lake), which are found in various mini-PCs that are cheaper than this ARM computer.
Even so, the performance of this ARM CPU is many times greater than that of a Raspberry Pi and greater than that of any cheaper ARM CPU. For greater performance, you must buy a more expensive smartphone, or a Qualcomm or Apple laptop or mini-PC, or a very expensive development computer from NVIDIA.
However, I’m not aware of any rk3588 vendors that support UEFI and also have a full-size PCIe slot like the MS-R1 has.
8 Cortex-A720 vs. 4 Cortex-A76 means at least 3 times better performance for optimized programs.
Also for I/O throughput, this computer has far more fast PCIe lanes than RK3588, allowing many fast peripherals.
Why is Fedora not considered good for a server?
Fedora releases only get about 13 months of updates, whereas Debian/Ubuntu have 5 years and RHEL/Alma/Rocky have 10 years.
I could see the side of maintenance burden being a potential point, meaning that one would be "pushed" to update the system between releases more often than something else.
If you can stay on v12.x for 10 years versus having to upgrade yearly to maintain support, that’s ideal. 12.x should always behave the same way with your app, whereas every major version upgrade may have breaking changes.
Servers don’t need to change, typically. They’re not chasing those quick updates that we expect on desktops.
However, for something like ARM and the use case this particular device may have, in reality you would _want_ (my opinion) to be on a more rolling-release distro to pick up the updates that make your system perform better.
I'd take a similar stance for devices that are built in a homelab for running LLMs.
For homelabs, that's out the window. Do whatever you want/fits your needs best. This isn't the place where you'd likely find highly available networks, clustered or highly available services, UPSes with battery banks, and so on.
You can't fall behind the release cycle, because their package repos drop old releases very quickly and you're left stranded.
A friend recently converted his Fedora servers to RHEL10 because he has kids now and just doesn't have the time for the release cycle. So RHEL, or Debian, Alma, Rocky, offer a lot more stability and less maintenance requirement for people who have a life.
For myself I've had nothing but positive experiences running Fedora on my servers.
For servers at work, I tried running Fedora. The idea was that it would be easier to have small, frequent updates rather than large, infrequent updates. Didn't work. App developers never had enough time to port their stuff to new releases of underpinning software, so we frequently had servers with unsupported OS version. Gave up and switched to RockyLinux. We're in the process of upgrading the Rocky8-based stuff to Rocky9. Rocky9 was released 2022.
Minisforum probably reused the x86 power supply for ARM. The x86 MS-01 and MS-A2 support GPUs after all.
I'm not a hardware engineer, I've failed miserably in software engineering and now run a VPS host.
Caveat: I'm frequently mistaken, always keen to learn and reduce the error between my perception and reality!
I’m curious how hard hosting VPS as a business was to get off the ground? I’ve worked 5 years previously as a Linux sysadmin, but am getting pretty bored at my current job (administering Cisco VOIP systems). Think I’d rather go back to that
My Beelink Me Mini has an integrated PSU. Actually same with the EQR6 I got too.
Otherwise I'd probably have a few machines from this company.
> I’ve always wanted an ARM server in my homelab. But earlier, I either had to use an underpowered ARM system, or use Asahi...
What is stopping you using Mac with MacOS?
With full disk encryption enabled you need a keyboard and display attached at boot to unlock it. You then need to sign in to your account to start services. You can use an IP based KVM but that’s another thing to manage.
If you use Docker, it runs in a vm instead of native.
With a Linux based ARM box you can use full disk encryption, use drop bear to ssh in on boot to unlock disks, native docker, ability to run proxmox etc.
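For anyone curious, the dropbear flow mentioned above looks roughly like this on a Debian-family system. Treat it as a sketch: package names and paths vary by distro and release (older Debian uses /etc/dropbear-initramfs/ instead of /etc/dropbear/initramfs/):

```shell
# Install the initramfs-integrated dropbear SSH server
sudo apt install dropbear-initramfs

# Authorize your key for the pre-boot environment
cat ~/.ssh/id_ed25519.pub | sudo tee -a /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so dropbear is included
sudo update-initramfs -u

# After a reboot, ssh in and unlock the root LUKS volume remotely
ssh root@<server> cryptroot-unlock
```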
Mac minis/studio have potential to be great low powered home servers but Apple is not going down that route for consumers. I’d be curious if they are using their own silicon and own server oriented distro internally for some things.
"On a Mac with Apple silicon with macOS 26 or later, FileVault can be unlocked over SSH after a restart if Remote Login is turned on and a network connection is available."
https://support.apple.com/guide/security/managing-filevault-...
The full disk encryption I can live without. I'm assuming these limitations don't apply if it's disabled. [Ah, I just saw the other reply that this has now been fixed]
I was aware of the Docker in a VM issue. I haven't tested this out yet, but my expectation is this can be mitigated via https://github.com/apple/container ?
I appreciate any insights here.
The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot.
https://security.apple.com/blog/private-cloud-compute/

Granted, I don't know if it's really server oriented or if they're a bunch of iPhones on cards plugged into existing servers.
On the flip side, an M4 mini is cheaper, faster, much smaller (with built in power supply) and much more efficient. Plus for most applications, they can run in a Linux container just as well.
At that price, why not a mac mini running linux? I think (skimming Asahi docs) the only things that would give you trouble don't matter to the headless usecase here?
> The strange CPU core layout is causing power problems; Radxa and Minisforum both told me Cix is working on power draw, and enabling features like ASPM. It seems like for stability, and to keep memory access working core to core, with the big.medium.little CPU core layout, Cix wants to keep the chip powered up pretty high. 14 to 17 watts idle is beyond even modern Intel and AMD!