It's wild to me that 10 Gbit isn't the norm by now, and that tech people who should know better think WiFi matches or even exceeds 1 Gbit ethernet. My MBP connects to my WiFi 7 setup (Ubiquiti E7) at a nominal 1.5-1.9 Gbit, but Time Machine backups and file transfers are slower than plugging into 1 Gbit ethernet, probably in large part due to latency and retransmissions. Not to mention that ethernet works with near-100% reliability and dramatically less variation in speed and error rate.
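To put rough numbers on that gap, here's a back-of-the-envelope sketch. The backup size and both effective rates are assumptions, not measurements: wired GbE reliably delivers close to line rate, while WiFi goodput typically lands well below the nominal PHY rate once retransmissions and airtime contention are accounted for.

```python
# Hypothetical comparison: wired 1 GbE vs. a nominal ~1.9 Gbit WiFi 7 link.
# 940 Mbit/s is typical TCP goodput on GbE; 700 Mbit/s is an assumed
# effective WiFi rate after retransmissions -- both are illustrative only.

def transfer_minutes(size_gb: float, effective_mbps: float) -> float:
    """Time to move size_gb gigabytes at a sustained effective_mbps rate."""
    bits = size_gb * 1e9 * 8
    return bits / (effective_mbps * 1e6) / 60

backup_gb = 200  # hypothetical Time Machine backup size

wired = transfer_minutes(backup_gb, 940)  # near wire speed on 1 GbE
wifi = transfer_minutes(backup_gb, 700)   # assumed WiFi goodput

print(f"wired 1 GbE: {wired:.0f} min, WiFi: {wifi:.0f} min")
```

Under those assumptions the nominally faster WiFi link still loses, which matches the comment's experience; small-file latency and error-rate variation would widen the gap further.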
Sorry, this is snarky and off topic, but I'm nostalgic for the days when Time Machine "just worked".
I've been on a lucky streak for several years now, where I haven't gotten that one on any of my devices.
"Preparing backup..." taking an unreasonable amount of time is a regular occurrence, and some edge cases around adjusting TM backup size quotas aren't handled well. But other than that, TM has been working reasonably well for me to back up 10 TB over SMB to a Synology NAS.
My gripe is much more with Apple's abysmal support for SMB and NFS, especially after deprecating AFP. I've been back and forth between them over the years and over several OS versions, and their implementations for both are just terrible.
But over time SMB, for me, proved slightly more stable and performant, with the right tweaks in smb.conf, and authentication and permissions/ownership are easier to deal with than NFS, so I stuck with that.
I also yearn for the days when TM just worked, because somehow, the alternatives are even worse:
- Arq Backup does some things quite well, which is why I use it as part of my 3-2-1. But some of its bugs and implementation decisions just scream "hobby grade" to me.
- Kopia looks interesting, but it's not mature enough yet. Failed for me with absolutely cryptic error messages during repo init both times I tried it, with versions several months apart.
- Restic, Borg / Vorta: Not turnkey enough for me.
TM heavily throttles the disk I/O used for backing up in order to ensure that normal user activity isn't affected. That makes TM appear dramatically slower than you would expect, which greatly annoys me. This becomes obvious after you run this command, which makes both the preparing and transferring phases run much closer to the theoretical speed you'd expect:
sudo sysctl debug.lowpri_throttle_enabled=0
That makes sense, and I usually quite like that behavior. I barely ever notice an impact when backups are running.
However, this happens every time on one machine (an Intel iMac), and semi-regularly on another (an M3 MBP): even after a fresh restart, after giving mds_stores time to settle down, and with the most recent backup just hours old and no significant changes on disk since.
In a situation like that, I would expect the "Preparing backup..." stage to just take a second to create an APFS snapshot, and maybe a minute to diff that snapshot against the remote state. But not 10+ minutes.
But thank you for the hint about that sysctl parameter! I will certainly give this a try.
Before I was using one of the common Synology consumer NAS boxes that are often recommended. The NAS didn't report any errors with the drives or its own hardware, but at least once a month TM would glitch on at least one of my home laptops.
My new setup is an Asus FLASHSTOR 12 Pro Gen2 FS6812X. For a year now it's been running without a single apparent TM glitch while backing up multiple personal laptops and my work laptop. Sometimes I'm plugged in and sometimes I'm backing up over WiFi, but it's always worked.
I tried various recommended settings for the Synology and nothing helped, so I strongly suspect that the Synology network protocol (SMB, AFP, etc.) implementations were either buggy themselves or at least not compatible with quirks in Apple's implementations. Synology->Asus fixed all my TM problems instantly and seemingly permanently!
The high expense of 10gig is, in part, because it isn't widely necessary and the people buying it are willing to pay extra.
Over 20 years ago, I was used to having 1g LAN for basic workstations and laptops in an office setting and probably 10-20g uplink from the building (shared by hundreds of staff). I also used 1g at home for my very small LAN between laptop, desktop, and SAN functions. But, my home ISP links were often terrible, such as 128k ADSL or even just a tethered GPRS phone at some points.
You end up with entirely different work styles when you have these different resource constraints.
During the day I need to pull large data files from the work VPN so it's nice that that can happen at full speed even when Steam and movie streaming are also at full throttle. Combine that with backups and moving various files back and forth to my NAS and I'm very happy to have 10Gb local wiring.
Nearly nobody has multigig anything in the home, and a probably surprisingly large percentage of business networking is 1 gig LAN or less. And most people would not notice the difference if they did.
I am glad it works for you, but everyone else most certainly doesn't need it. (Yet.)
Personally, I do try for mostly gigabit in my home, because I do selfhost, but I have a ~800 Mbps download service (200 Mbps upload, it's asymmetric) that was only 500 Mbps when I signed up. And to be honest most of my patch cables are CAT5e because I'm cheap. I do make sure to run CAT6 through walls though because I don't want to ever have to do it again.
Also, I used to have Astound, and I feel so much sympathy for Google Fiber customers, you have no idea what's coming. If you thought Google had a reputation for bad customer service... just wait!
Not being divorced from reality is the only reason I have not dropped $5K on the new Dream Machine Beast that was just released and have not swapped out my Enterprise 48 PoE (1st gen.) for the newest version that has 12 10G-BaseT ports.
Having 100 TB of storage in your home basement is an even more extreme minority than that. ;)
A gigabit connection is more than enough for a 500-person call center today.
And then hilariously, once you go above a gig, the reality is most sites won't serve them to you any faster than that anyways.
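The call-center claim is easy to sanity-check with per-user arithmetic. The 500-person figure comes from the comment; the ~1.5 Mbit/s per-call number is an assumed planning figure for an HD voice/video call, not a vendor spec.

```python
# Even split of one gigabit uplink across a 500-person call center.
# In practice not everyone is in a call at once, so the effective
# per-active-user share is higher than this worst case.

link_mbps = 1000  # one gigabit uplink
users = 500

per_user_mbps = link_mbps / users
print(f"{per_user_mbps} Mbit/s per user")  # 2.0 Mbit/s

call_mbps = 1.5  # assumed per-call planning figure for HD voice/video
print(f"headroom per user vs. one HD call: {per_user_mbps - call_mbps} Mbit/s")
```

Even with every seat simultaneously on an HD call, a gigabit link clears the bar, which is the comment's point.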
If you want a full FW solution that can actually FW+NAT at 10G bidirectional without breaking a sweat then something like the FortiGate 90G is the cheapest thing I've found that performs really well across the board. Great QoS, great latency, amazing throughput performance (does well with even small packet sizes in a single stream), easy enough to use UI (once you get oriented), low power. If you want to enable all of the NGFW stuff (e.g. AV and IPS) then it'll dip below line rate though.
If you just want something that does NAT and connection-direction-oriented filtering like a "normal" home router, then something like the MikroTik CCR2004 can get you better performance than they got on the VP2440, plus 12 ports of 10G SFP+ to work with. If you were planning to do "fancy" FWing/functionality beyond a normal home NAT FW (with decent managed switching built in), then the feature set will be a bit limiting, of course.
This is not quite correct.
The primary problem is cross-talk. Copper wire itself will carry the relevant frequencies up to 100m without issue but even with balanced pairs the balancing is not perfect and the "dirty paper precoding" is not perfect so some cross-talk will occur. How long you can go with Cat-5e depends on how well the wire is twisted, how many wires are bundled together, are there any loops or tight bends, and other factors. Cat-6A guarantees less cross-talk with more twists, better balancing, and a plastic separator inside the cable to make the cross-talk more regular and thus easier to cancel out.
Bottom line is: for almost any normal home or apartment any quality Cat-5e cable properly terminated will carry 10GBase-T without issue. In fact if you have problems I would first re-terminate the cable before assuming you need to run new cable. Cat-6 or 6A just isn't necessary.
As a PSA: beware of "CCA". I've noticed Amazon and eBay are absolutely flooded with cheap Chinese electrical and networking cable that shows nice shiny copper in the pictures but is actually "copper clad aluminum". If they mention anything at all they code it as "CCA" cable without explaining what that means.
CCA cable cannot, by definition, be ethernet cable. I won't get into the full technical details but the standard was amended to clarify that only pure copper wires are acceptable for ethernet. Personally I would not dare use CCA for anything. It has lower performance, lower current-carrying capability for the same wire diameter (inherent in aluminum), and introduces the risk of oxidation and loosening of connections as people will treat them as copper connections when aluminum needs special installation procedures and connections to avoid them coming loose over time. For electrical connections especially this not only can but absolutely will lead to a fire over time if not treated with the appropriate care. All it takes is a little bit of mechanical action scraping off the thin copper layer and you now have an effectively aluminum wire - a time bomb ticking away.
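The "lower current-carrying capability" point follows directly from bulk resistivity. A quick sketch, using standard textbook resistivity values; the 23 AWG conductor diameter (~0.573 mm, typical of Cat6) is an assumption for the sake of the example.

```python
# Resistance per meter of one twisted-pair conductor: solid copper vs.
# the worst case for CCA (copper layer scraped off, effectively aluminum).
import math

RHO_CU = 1.68e-8  # ohm*m, copper (textbook value at 20 C)
RHO_AL = 2.65e-8  # ohm*m, aluminum

d = 0.573e-3                   # assumed 23 AWG conductor diameter, meters
area = math.pi * (d / 2) ** 2  # cross-sectional area

r_cu = RHO_CU / area  # ohms per meter, solid copper
r_al = RHO_AL / area  # ohms per meter, same gauge in aluminum

print(f"copper: {r_cu * 1000:.1f} mOhm/m, aluminum: {r_al * 1000:.1f} mOhm/m")
print(f"aluminum: ~{(r_al / r_cu - 1) * 100:.0f}% more resistance per meter")
```

Roughly 58% more resistance at the same gauge means proportionally more I²R heating for the same current, which is exactly why CCA is a bad fit for PoE and why electrical codes treat aluminum terminations specially.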
It is nice moving/streaming large files across the network at 10 gbit. It really is ten times less waiting than with plain old gigabit.
Of course, most of the time I'm working with lots of small files, and then the spinning disk array in the NAS has no chance of saturating this giant pipe, or even a normal gigabit connection...
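The "ten times less waiting" claim is just line-rate arithmetic. A sketch with a hypothetical 50 GB file; this ignores protocol overhead and assumes the disks can keep up, which (as the comment notes) spinning rust often can't for small files.

```python
# Line-rate transfer time for a large sequential file, 1 GbE vs. 10 GbE.
# File size and link rates are illustrative; overhead is ignored.

def seconds(size_gb: float, link_gbps: float) -> float:
    """Transfer time in seconds: gigabytes * 8 = gigabits, / Gbit/s."""
    return size_gb * 8 / link_gbps

size = 50  # hypothetical 50 GB file

t_1g = seconds(size, 1)    # 400 s, about 6.7 minutes
t_10g = seconds(size, 10)  # 40 s

print(f"1 GbE: {t_1g:.0f} s, 10 GbE: {t_10g:.0f} s ({t_1g / t_10g:.0f}x)")
```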
FWIW, Cat 5e supplanted Cat 5 25 years ago.
Did you check the jackets? I've got cable marked Cat5 (no e) running 10GBASE-T. A lot of cable exceeds the specs on its jacket, and spec-compliant wire provides enough signal-to-noise for the rated length even in dense conduit. With shorter runs and without dense wiring, lesser cable can work.
So I bought a reel of that even though I was only going to be using 1000-BaseT. I don't remember there being too much premium on the wire itself.
I have 1.5/900 fibre to my house, and I bring a 2.5 line from the modem to my home office where a 2.5 switch delivers it to my workstation, laptop, and unraid NAS. But those devices are all themselves just gigE I think, and I've yet to come up against a download (even a torrent) that seems like it would have really benefitted from having the entire theoretical 1.5 pipe available.
Home users don’t need more bandwidth to improve their internet experiences, they need lower latency, less congestion and less loss.
https://help.netflix.com/en/node/306
https://learn.microsoft.com/en-us/microsoftteams/prepare-net...
More and more regular people are getting network storage appliances. More and more people have laptops with SSDs that can write at 4 or 5 GB/s. Why shouldn't they get to use all of it?
What's described in the post is the tech equivalent of souping up a sports car and then driving it in rush-hour traffic. It's fun to geek out doing it, but practically, in everyday use, the difference will be negligible. Even with large file uploads and downloads, there's a good chance that services won't reach those throughputs end to end.
What’s telling is that the post shows screenshots and charts from artificial speed tests. No videos of the Dropbox client chugging away with throttled uploads.
640k should be enough for everybody... DSL should be enough for everybody...
If you build it, they will come.
> I've yet to come up against a download (even a torrent) that seems like it would have really benefitted from having the entire theoretical 1.5 pipe available.
There are many things along the way that would get in the way of a home user downloading something from the internet that would hit that 5GB/s speed. It's not that people should be "banned" from it or something, more that the investment cost isn't worth it.
Yah, our P95 bandwidth is just a few megabits per second. But it's not that expensive and routinely saves me a few minutes here and there.
10gbps on the LAN is more broadly useful. Pegging it for a file share is a daily occurrence.
My gaming time is limited so the faster the better.
high latency, high error rates, and terrifying heat output from SFPs (which the author noted for himself)
the only cat6 left in my home network is the link to verizon's ont, because in their infinite wisdom the ONLY connectivity offered was 10g-base-t
The new 10GBASE-T SFPs are actually not too bad - you can get the full 100 meters at half the wattage it takes for the old space-heater generation to reach 30 meters. Based on the article, the author did not know there were newer, cooler options for about the same price.
Meanwhile I'm sat here wishing I could justify running any ethernet in my apartment, but improving wi-fi tech means I never can...
I wonder if you could negotiate down to 1gbit until you see some level of activity, if that would help at all?
I'm still eyeing 10Gb, but if my home needs +30w for three computers, I don't feel like it's really worth it. Would love to see more details on the power consumption from folks, especially tuning for idle.
You can fix the thermal issue either by adding a small fan (Noctua is great) or by adding more radiators: https://pics.ealex.net/share/UxeSf_AWHLIuc-qzK5zl7JIgQvQDAZh...
I've been running it like this in a closed comm box for the last 3 years without any issues. SFP+ modules actually do not use that much power, it's just that it's concentrated into a small package, resulting in high temps.
"The apartment has structured cabling -- each room has one or more RJ45 sockets in the wall," ...
Which is the main problem most folks face.
wish the standard was "conduit" instead of "bake-this-years-tech-into-the-wall" which doesn't always last...
But the simple truth for all those decades is this: When there's already cat-whatever cable in the wall, it generally still works.
Decently-installed conduit (ie, actually-usable conduit) adds a ton of time and expense, which is why it is very seldom used for data circuits in residential structures.
The cable that exists is a lot better than the conduit that doesn't. And copper ethernet is bog-standard like MP3 is: It isn't the best in any technical sense at all, but everything supports it. Universal compatibility is pretty nice.
---
So the ongoing cost of copper 10GbE is electricity. Someone else here in the comments says that a copper 10GbE SFP+ module can use ~3 watts, or that a newer one can use about 1.5 watts.
We can be generous and use the larger figure of 3 watts across 8 devices.
With 4 links (a module at each end, so eight 10-gig endpoints) at 3 watts each, and $0.19 per kWh [delivered]: that's $3.28 per month, or about $400 per decade.
If we assume 1.5 Watt endpoints, then that number halves.
If we subtract the power consumption of fiber SFP+ modules (or media converters or whatever) to make the number a relative comparison instead of an absolute, then that figure goes down further.
Not so bad, compared to conduit.
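Spelling out the electricity math above as a sketch, using the comment's own inputs. The only added assumption is 720 hours per month (24 x 30), which is what makes the numbers land on $3.28.

```python
# Ongoing electricity cost of copper 10GbE modules, per the comment's figures.

modules = 8          # four links, an SFP+ module at each end
watts_each = 3.0     # generous per-module draw (newer modules: ~1.5 W)
price_kwh = 0.19     # dollars per delivered kWh
hours_month = 24 * 30

kwh_month = modules * watts_each * hours_month / 1000
cost_month = kwh_month * price_kwh
cost_decade = cost_month * 12 * 10

print(f"${cost_month:.2f}/month, ${cost_decade:.0f}/decade")
```

Halve `watts_each` for the newer 1.5 W modules and the whole decade comes in under $200, which is indeed cheap compared to retrofitting conduit.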
Fiber is much better, but in my case the house already had Cat5 Ethernet wiring (originally used just for phones!) everywhere.
Another use case for 10G Ethernet is PoE for the WiFi access points. Although you can't use SFP+ modules for that, of course.
But I'll keep using a gigabit switch because I have absolutely no idea what I'd use 10G for. It's crazy that gigabit was affordable for me as a student in the early 00s and between then and now we've gone from DVDs to 4K and it's still plenty fast enough. In fact, most people are happy with WiFi (not me, though).
Example of the new gen: https://www.amazon.com/Wiitek-Transceiver-Compatible-UF-RJ45...
Old gen: https://www.amazon.com/10Gtek-SFP-10G-T-S-Compatible-10GBase...
Typically the old gen uses a Marvell AQR113C, and the new gen uses a Broadcom chip that I forget the number of off hand.
Using this module, I was able to get a stable 10 gig over a 75 feet long, 20 year old run of Cat 5e.
The cost, power, and length issues meant that it wasn't exactly well-received by the datacenter market back in 2006(!) when it was first released: DAC was the far more attractive option for a link from server to top-of-rack, and fiber was obviously superior (if not plain required) for anything beyond a hop to the next aisle over.
This left an incredibly tiny market, so obviously beyond the initial investment very little effort was put into developing new products for it. So now the prosumer market is hitting the limit of 1Gbps, 2.5GBASE-T and 5GBASE-T (both based on the techniques of 10GBASE-T, by the way) are becoming the norm, and suddenly network vendors remember that box of ancient 10GBASE-T transceiver chips that has been collecting dust in their warehouse.
Aaand suddenly you've got people buying what they think is a brand-new technology, but which is actually designed and manufactured using technology from a decade and a half earlier, and 10GBASE-T gets a bad name for being "hot" and "power-hungry". Turns out it is actually reasonably well-behaved if you actually make use of modern technology!
I expect we'll be using it for quite a while. 25GBASE-T and 40GBASE-T are even deader: A standard from 2016, which a decade later doesn't have a single available product? Mandatory switching to Cat8 cabling - and only a 30 meter range??? And no forwards-looking compatibility? Yeeaah, no thanks.
I have a MikroTik CRS304-4XG-IN on my office desk with three out of four ports at 10gbit and it is perhaps 20 degrees above ambient on the outside. Warm but not hot. Passively cooled design.
A normal Windows laptop runs hotter than that when idle.