10Gb/s Ethernet: what I did to get it working in my home
99 points by gpjt 2 days ago | 74 comments

xxpor 3 hours ago
Unfortunately the blog didn't link to the SFP+ module they're using, but everyone should know there are effectively two different generations of 10gbit SFP+ to ethernet^H10GBASE-T modules. The old gen, labeled as 30 meters, draws ~3 W and gets extremely hot (to the point that it'll usually cause link flaps); the newer gen, usually labeled as 100m or 80m, draws ~1.5 W and runs much, much cooler.

Example of the new gen: https://www.amazon.com/Wiitek-Transceiver-Compatible-UF-RJ45...

Old gen: https://www.amazon.com/10Gtek-SFP-10G-T-S-Compatible-10GBase...

Typically the old gen uses a Marvell AQR113C, and the new gen uses a Broadcom chip whose number I forget offhand.

reply
jdprgm 12 minutes ago
If you want to buy the cheaper old ones and are concerned about heat, just add a USB fan. I have the same MikroTik switch as in the post, with two SFP-to-RJ45 modules plus this fan https://www.amazon.com/dp/B00G059G86?th=1 sitting on top, and it makes a dramatic difference in temperature.
reply
perarneng 2 hours ago
I had this issue with the old-gen Unifi SFP+ to RJ45 10GbE modules; 3 failed. Needed gloves to remove them. Bought the newer gen and they are warm, but I don't need gloves.
reply
rayiner 54 minutes ago
BCM84891L. I like these modules (select 80 or 100 m in the drop down): https://www.luleey.com/product/10gbase-t-sfp-to-rj45-copper-...

Using this module, I was able to get a stable 10 gig over a 75-foot, 20-year-old run of Cat 5e.

reply
CSSer 3 hours ago
Wow, and at essentially the same price!
reply
xxpor 2 hours ago
Yeah, the new ones have gotten much cheaper it seems. About a year ago they were ~2x.
reply
thefz 2 hours ago
Thanks! 10Gb Eth is insane for exactly this reason (optical SFP+ modules are way cheaper and more reliable)
reply
crote 10 minutes ago
The main issue is that it is ancient.

The cost, power, and length issues meant that it wasn't exactly well-received by the datacenter market back in 2006(!) when it was first released: DAC was the far more attractive option for a link from server to top-of-rack, and fiber was obviously superior (if not plain required) for anything beyond a hop to the next aisle over.

This left an incredibly tiny market, so obviously, beyond the initial investment, very little effort was put into developing new products for it. So now that the prosumer market is hitting the limit of 1Gbps, 2.5GBASE-T and 5GBASE-T (both based on the techniques of 10GBASE-T, by the way) are becoming the norm, and suddenly network vendors remember that box of ancient 10GBASE-T transceiver chips that has been collecting dust in their warehouse.

Aaand suddenly you've got people buying what they think is a brand-new technology, but which is actually designed and manufactured using technology from a decade and a half earlier, and 10GBASE-T gets a bad name for being "hot" and "power-hungry". Turns out it is actually reasonably well-behaved if you actually make use of modern technology!

I expect we'll be using it for quite a while. 25GBASE-T and 40GBASE-T are even deader: A standard from 2016, which a decade later doesn't have a single available product? Mandatory switching to Cat8 cabling - and only a 30 meter range??? And no forwards-looking compatibility? Yeeaah, no thanks.

reply
apelapan 2 hours ago
I don't agree that it is insane. It is less efficient than ideal, but 10gbit over copper is not necessarily dangerously hot or difficult to power.

I have a MikroTik CRS304-4XG-IN on my office desk with three out of four ports at 10gbit and it is perhaps 20 degrees above ambient on the outside. Warm but not hot. Passively cooled design.

A normal Windows laptop runs hotter than that when idle.

reply
oakwhiz 2 hours ago
Minor nitpick but they are both considered Ethernet. The 4 pair copper one is 10GBASE-T.
reply
rayiner 53 minutes ago
Yeah, but home users already often have 75-100 foot runs of Cat 5e in the walls, and those work fine for 10GBASE-T.
reply
TexanFeller 4 hours ago
I'm extremely happy after upgrading my network to 10gbit copper ethernet. It was much more expensive than I thought it should be, but worth it even if I only max it out occasionally. Now I can easily saturate my 10gbit ethernet doing a first Time Machine backup or transferring files to my M.2 SSD NAS, which saves me waiting time and is satisfying to watch.

It's wild to me that 10gbit isn't the norm by now, and that tech people who should know better seem to think WiFi matches or even exceeds 1gbit ethernet. My MBP connects to my WiFi 7 setup (Ubiquiti E7) at a nominal 1.5-1.9gbit, but Time Machine backups and file transfers are slower than plugging into 1gbit ethernet, probably in large part due to latency and retransmissions. Not to mention that ethernet works with near-100% reliability and with dramatically less variation in speed and error rate.

reply
floathub 2 hours ago
It's wild to me Time Machine works on your network. Are you just doing "first backups" over and over again, or have you somehow achieved the very rare state where Time Machine can run for, say, a week at a time without falling over?

Sorry, this is snarky and off topic, but I'm nostalgic for the days when Time Machine "just worked".

reply
lukasgraf 2 hours ago
I can't remember the exact phrasing, but are you talking about the error message that essentially says: "The Tardis is broken. Your backup has diverged into an entirely separate timeline, and I have no way of reconciling it. You may now sacrifice an entire weekend to do an initial backup again."?

I've been on a lucky streak for several years now, where I haven't gotten that one on any of my devices.

"Preparing backup..." taking an unreasonable amount of time is a regular occurrence, and some edge cases around adjusting TM backup size quotas aren't handled well. But other than that, TM has been working reasonably well for me to back up 10 TB over SMB to a Synology NAS.

My gripe is much more with Apple's abysmal support for SMB and NFS, especially after deprecating AFP. I've been back and forth between them over the years and over several OS versions, and their implementations for both are just terrible.

But over time SMB, for me, proved slightly more stable and performant, with the right tweaks in smb.conf, and authentication and permissions/ownership are easier to deal with than NFS, so I stuck with that.

I also yearn for the days where TM just worked, because somehow, the alternatives are even worse:

- Arq Backup does some things quite well, which is why I use it as part of my 3-2-1. But some of its bugs and implementation decisions just scream "hobby grade" to me.

- Kopia looks interesting, but it's not mature enough yet. Failed for me with absolutely cryptic error messages during repo init both times I tried it, with versions several months apart.

- Restic, Borg / Vorta: Not turnkey enough for me.

reply
TexanFeller 43 minutes ago
> "Preparing backup..." taking an unreasonable amount of time is a regular occurrence,

TM heavily throttles disk I/O used for backing up in order to ensure that normal user activity isn't affected. That makes TM appear dramatically slower than you would expect, which greatly annoys me. This becomes obvious after you run this command, which will make both the preparing and transferring phases go closer to the theoretical speed you'd expect:

sudo sysctl debug.lowpri_throttle_enabled=0

reply
lukasgraf 26 minutes ago
> TM heavily throttles disk I/O used for backing up

That makes sense, and I usually quite like that behavior. I barely ever notice an impact when backups are running.

However, this is happening every time on one machine (Intel iMac), and semi-regularly on another one (M3 MBP), after a fresh restart, giving mds_stores some time to settle down, and the most recent backup just hours ago, with no significant changes on disk since.

In a situation like that, I would expect the "Preparing backup..." stage to just take a second to create an APFS snapshot, and maybe a minute to diff that snapshot against the remote state. But not 10+ minutes.

But thank you for the hint about that sysctl parameter! I will certainly give this a try.

reply
TexanFeller 2 hours ago
For a very long time I thought Time Machine had become flaky, and I'm sure it's partially to blame, but with my current setup I've literally never observed it corrupt a backup and have to start over.

Before I was using one of the common Synology consumer NAS boxes that are often recommended. The NAS didn't report any errors with the drives or its own hardware, but at least once a month TM would glitch on at least one of my home laptops.

My new setup is an Asus FLASHSTOR 12 Pro Gen2 FS6812X. For a year now it's been running without a single apparent TM glitch while backing up multiple personal laptops and my work laptop. Sometimes I'm plugged in and sometimes I'm backing up over WiFi, but it's always worked.

I tried various recommended settings for the Synology and nothing helped, so I strongly suspect that the Synology network protocol (SMB, AFP, etc.) implementations were either buggy themselves or at least not compatible with quirks in Apple's implementations. Synology->Asus fixed all my TM problems instantly and seemingly permanently!

reply
bombcar 2 hours ago
Time Machine to a network share via Samba has been pretty reliable for me - only once has it corrupted itself in the five+ years I’ve been using it.

Amusingly enough Time Machine to a local drive failed completely.

reply
ocdtrekkie 2 hours ago
Heh, it's honestly wild to me that anyone needs over a gig. My work has a one-gig fiber line supporting hundreds of employees, and usage generally remains below 10%.

The high expense of 10gig is, in part, because it isn't widely necessary and the people buying it are willing to pay extra.

reply
jdprgm 8 minutes ago
I have it more for fast NAS access and being able to treat the NAS disks as if they performed more or less like SATA disks directly in my machine. It's significantly less about the external network aspect.
reply
saltcured 20 minutes ago
Depends a lot on your work type and your prior exposure. If you only work "locally" and upload/download rarely, you may be way less demanding of your network than if you actually do distributed work with remote storage, high-bandwidth communicating tasks, etc.

Over 20 years ago, I was used to having 1g LAN for basic workstations and laptops in an office setting and probably 10-20g uplink from the building (shared by hundreds of staff). I also used 1g at home for my very small LAN between laptop, desktop, and SAN functions. But, my home ISP links were often terrible, such as 128k ADSL or even just a tethered GPRS phone at some points.

You end up with entirely different work styles when you have these different resource constraints.

reply
TexanFeller 58 minutes ago
1Gb Internet service seems low these days, much less 1Gb LAN. I have 3Gb Google Fiber service and actually get 2+ for individual downloads from some internet services like Steam. Even at 2Gb it's annoying to wait tens of minutes for 100+ GiB games to download. If I go on vacation I come home with 10s of GiBs of photos and videos on multiple devices that start syncing with cloud storage.

During the day I need to pull large data files from the work VPN so it's nice that that can happen at full speed even when Steam and movie streaming are also at full throttle. Combine that with backups and moving various files back and forth to my NAS and I'm very happy to have 10Gb local wiring.
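As a quick sketch of the arithmetic behind those waits (the sizes and rates here are illustrative, not measurements from this thread):

```python
# Rough transfer time for a given file size (GiB) over a link rate (Gbps).
def transfer_minutes(size_gib: float, rate_gbps: float) -> float:
    bits = size_gib * 8 * 1024**3      # GiB -> bits
    return bits / (rate_gbps * 1e9) / 60

print(round(transfer_minutes(100, 2.0), 1))  # ~7.2 min for 100 GiB at 2 Gbps
print(round(transfer_minutes(100, 1.0), 1))  # ~14.3 min at 1 Gbps
```

Which is why the effective per-download rate, not just the headline service tier, is what determines how long those game downloads and photo syncs actually take.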

reply
ocdtrekkie 46 minutes ago
This is one of those things where I just have to express that a lot of the HN crowd is entirely divorced from the reality the rest of the world experiences. ;)

Nearly nobody has multigig anything in the home, a probably surprisingly large percentage of business networking is 1gig LAN or less. And most people would not notice the difference if they did.

I am glad it works for you, but everyone else most certainly doesn't need it. (Yet.)

Personally, I do try for mostly gigabit in my home, because I do selfhost, but I have a ~800 Mbps download service (200 Mbps upload, it's asymmetric) that was only 500 Mbps when I signed up. And to be honest most of my patch cables are CAT5e because I'm cheap. I do make sure to run CAT6 through walls though because I don't want to ever have to do it again.

Also, I used to have Astound, and I feel so much sympathy for Google Fiber customers, you have no idea what's coming. If you thought Google had a reputation for bad customer service... just wait!

reply
myrandomcomment 12 minutes ago
I disagree. I pay $120 a month for a 5Gbps symmetric connection. I could upgrade that to 10Gbps for 2x, but there is no reason at this point. Even the local max from the cable company is more than 1Gbps: 1.2Gbps down / ~300Mbps up for around $80. Everything is streaming now. I work from home, on video calls. My better half will be watching something on the AppleTV, and the kid will be doing the same. I have Backblaze running to do backups to the cloud, and 3 different laptops that run Time Machine backups to the NAS. The AppleTVs also have the Infuse app on them to stream local video files from the NAS. The security cameras are a constant 60Mbps, 24/7/365, to the NVR. The laptops can push a gig wireless and 2.5Gbps when plugged into the Thunderbolt docks.

It is not clear that I need 10Gbps everywhere, but it has its uses. The NAS is at 10G. The link from the main switch to the router is 10G. The 3 APs in the house are at 2.5G and the 2 outside are at 1G. There was a noticeable difference when right-sizing the shared link paths up from 1G; and by noticeable I mean both perceived and measured. I used to do switch bring-up and competitive testing, so I have a pretty good idea how this all comes together. Given that a reasonably cheap set of APs can now handle clients at above 1G, and internet speeds in some areas are above 1G, moving to at least 2.5G in places is useful and not divorced from reality. I am in tech, but I have helped my non-tech friends upgrade APs, etc., for their normal everyday home use cases, and they have all been quite happy with the change.

Not being divorced from reality is the only reason I have not dropped $5K on the new Dream Machine Beast that was just released and have not swapped out my Enterprise 48 PoE (1st gen.) for the newest version that has 12 10G-BaseT ports.

reply
godzillabrennus 2 hours ago
I put 5Gbit internet into my home (fiber) to build my startup. I'm processing terabytes of data. I have over 100TB of storage in my basement. I can regularly saturate my internet connection. That said, I remember well when a 1Gbit connection provided enough bandwidth for a 500-person call center for daily workloads (back about 10 years ago).
reply
chromadon 60 minutes ago
Off topic (sorry). Interested to know what your startup is?
reply
ocdtrekkie 2 hours ago
That puts you in an extreme minority, even amongst enterprise businesses. Many medium sized enterprises have storage that looks like "a couple dozen TB total" for hundreds of staff.

Having 100 TB of storage in your home basement is an even more extreme minority than that. ;)

A gigabit connection is more than enough for a 500-person call center today.

reply
bombcar 2 hours ago
Much works on 10/100 if you wanna know the truth about it - but it is really nice to hit full speeds when copying terabytes around.
reply
ocdtrekkie 2 hours ago
Agreed. We had IP phones with 100 Mbps switches between most of our computers and the rest of the network for a long time, and very few people noticed. It'd only really be when I was installing a system upgrade or something, and I'd be like "man, it'd be nice if this didn't take an extra two minutes". For normal web access, 100 Mbps and 1000 Mbps aren't really discernible until you're downloading large files. With a lot of 4K streaming video, though, you'll start to feel it quite a bit sooner.

And then hilariously, once you go above a gig, the reality is most sites won't serve them to you any faster than that anyways.

reply
bombcar 2 hours ago
I found the nicest thing about fiber is that I can hit over a Gb/s uploading, which is often much more critical-path for whatever I'm doing than a download.
reply
godzillabrennus 2 hours ago
[dead]
reply
vanillanuttaps 2 hours ago
[dead]
reply
zamadatix 2 hours ago
The only thing I'd caution anyone else looking to do the same with a software router/FW like the Protectli is that it's usually not hard to get the raw bandwidth to look nice with big flows, but new-connection latency, connections per second, jitter, and QoS handling tend to suffer vs something with hardware offloads (which is what most people are used to, even with cheapo gigabit home AP+router+switch combos). It's also not usually the cheapest way to get 10G-class NAT/L4 FW bandwidth, but it is usually the cheapest way to get "full" FW functionality if you don't care as much about the performance.

If you want a full FW solution that can actually FW+NAT at 10G bidirectional without breaking a sweat then something like the FortiGate 90G is the cheapest thing I've found that performs really well across the board. Great QoS, great latency, amazing throughput performance (does well with even small packet sizes in a single stream), easy enough to use UI (once you get oriented), low power. If you want to enable all of the NGFW stuff (e.g. AV and IPS) then it'll dip below line rate though.

If you just want something that NATs/connection direction oriented filtering like a "normal" home router then something like the MikroTik CCR2004 can get you better than the performance they got on the VP2440 + give you 12 ports of 10G SFP+ to work with. If you were planning to do "fancy" FWing/functionality beyond a normal home NAT FW (with decent managed switching built in) then the feature set will be a bit limiting, of course.

reply
xenadu02 34 minutes ago
> The most important question was the structured cabling in the walls; was it CAT-5E or CAT-6, or even CAT-6A? Remember from the last post, 10GBASE-T might work over short runs of -5E (even though officially it's not meant to be able to).

This is not quite correct.

The primary problem is cross-talk. Copper wire itself will carry the relevant frequencies up to 100m without issue but even with balanced pairs the balancing is not perfect and the "dirty paper precoding" is not perfect so some cross-talk will occur. How long you can go with Cat-5e depends on how well the wire is twisted, how many wires are bundled together, are there any loops or tight bends, and other factors. Cat-6A guarantees less cross-talk with more twists, better balancing, and a plastic separator inside the cable to make the cross-talk more regular and thus easier to cancel out.

Bottom line is: for almost any normal home or apartment any quality Cat-5e cable properly terminated will carry 10GBase-T without issue. In fact if you have problems I would first re-terminate the cable before assuming you need to run new cable. Cat-6 or 6A just isn't necessary.

As a PSA: beware of "CCA". I've noticed Amazon and eBay are absolutely flooded with cheap Chinese electrical and networking cable that shows nice shiny copper in the pictures but is actually "copper-clad aluminum". If they mention anything at all, they code it as "CCA" cable without explaining what that means.

CCA cable cannot, by definition, be ethernet cable. I won't get into the full technical details but the standard was amended to clarify that only pure copper wires are acceptable for ethernet. Personally I would not dare use CCA for anything. It has lower performance, lower current-carrying capability for the same wire diameter (inherent in aluminum), and introduces the risk of oxidation and loosening of connections as people will treat them as copper connections when aluminum needs special installation procedures and connections to avoid them coming loose over time. For electrical connections especially this not only can but absolutely will lead to a fire over time if not treated with the appropriate care. All it takes is a little bit of mechanical action scraping off the thin copper layer and you now have an effectively aluminum wire - a time bomb ticking away.

reply
apelapan 4 hours ago
I might have been lucky, but in the one home and one office where I've connected 10gbit switches and PCIe cards, it has just worked. The office especially was a nice surprise, because there are at least 20 meters (probably more) of unknown cabling and at least one unknown patch panel between the utility closet where the NAS lives and the desk area. The cables were run 15 years ago, so I expected it to be cat 5, but clearly not.

It is nice moving/streaming large files across the network at 10 gbit. It really is ten times less waiting than with plain old gigabit.

Of course, most of the time I'm working with lots of small files, and then the spinning disk array in the NAS has no chance of saturating this giant pipe, or even a normal gigabit connection...

reply
loeg 4 hours ago
> The cables were run 15 years ago, so I expected it to be cat 5

FWIW, Cat 5e supplanted Cat 5 25 years ago.

reply
toast0 2 hours ago
> The cables were run 15 years ago, so I expected it to be cat 5, but clearly not.

Did you check the jackets? I've got Cat5 (no e) marked cable running 10GBASE-T. A lot of cable exceeds the specs on its jacket: cable meeting the spec has to provide enough signal-to-noise at the full rated length in dense conduit. When you have shorter runs, without dense wiring, lesser cable can work.

reply
kstrauser 2 hours ago
Same. I ran CAT6 from one end of our house to the other, because my home office is in the opposite corner as the fiber coming in to the router. After running that, and manually crimping everything, it Just Worked from the first time I turned everything on. That felt pretty good.
reply
saltcured 4 hours ago
I know when I was doing some custom wiring in a house around 2005-6, it was clear that cat-6e was the thing to use if you wanted any future-proofing.

So I bought a reel of that even though I was only going to be using 1000-BaseT. I don't remember there being too much premium on the wire itself.

reply
loeg 4 hours ago
Do you mean Cat 6 or Cat 5e? Cat 6e isn't a thing, and 6a didn't exist in 2006. 6 was certainly more future-proof at the time, although arguably 5e is still fine even today. (Super-gigabit consumer equipment didn't really exist until the last five years, and it's still notably more expensive and less common than gigabit, which runs on 5e just fine.)
reply
saltcured 3 hours ago
Cat 6 for sure. I thought there was some extra tweak beyond that, but I probably misremembered after all this time. Perhaps it was just that it was plenum-rated.
reply
globular-toast 2 hours ago
You might be thinking of cat6a which officially supports 10G over long lengths, unlike cat6.
reply
saltcured 14 minutes ago
Nah, it was definitely around 2006 which seems to be before cat 6a was ratified. So I couldn't have bought a reel marked that way...
reply
mikepurvis 4 hours ago
That's pretty wild.

I have 1.5/900 fibre to my house, and I bring a 2.5 line from the modem to my home office where a 2.5 switch delivers it to my workstation, laptop, and unraid NAS. But those devices are all themselves just gigE I think, and I've yet to come up against a download (even a torrent) that seems like it would have really benefitted from having the entire theoretical 1.5 pipe available.

reply
rhplus 4 hours ago
10Gbps is enough bandwidth for 500 concurrent Netflix streams in 4K/UHD (15Mbps) AND 500 concurrent video calls (4Mbps).

Home users don’t need more bandwidth to improve their internet experiences, they need lower latency, less congestion and less loss.

https://help.netflix.com/en/node/306

https://learn.microsoft.com/en-us/microsoftteams/prepare-net...
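The arithmetic behind that claim, as a quick sanity check (the 500-stream counts are the illustrative figures above, and the per-stream rates are the ones cited from Netflix and Microsoft):

```python
# Aggregate bandwidth of the hypothetical household above.
NETFLIX_4K_MBPS = 15   # cited Netflix 4K/UHD rate
VIDEO_CALL_MBPS = 4    # cited video call rate

total_mbps = 500 * NETFLIX_4K_MBPS + 500 * VIDEO_CALL_MBPS
print(total_mbps)                 # 9500 Mbps
print(total_mbps <= 10_000)       # True -- fits within a 10 Gbps link
```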

reply
wpm 4 hours ago
Home users only do video calls and watch Netflix?

More and more regular people are getting network storage appliances. More and more people have laptops with SSDs that can write at 4 or 5 GB/s. Why shouldn't they get to use all of it?

reply
rhplus 3 hours ago
I should have said most home users. My point is that more bandwidth at this point probably won’t affect 99.999% of home users.

What’s described in the post is the tech equivalent of souping up a sports car and then driving it in rush-hour traffic. It’s fun to geek out doing it, but practically, in everyday use, the difference will be negligible. Even with large file uploads and downloads, there’s a good chance that services won’t reach those throughputs end to end.

What’s telling is that the post shows screenshots and charts from artificial speed tests. No videos of the Dropbox client chugging away with throttled uploads.

reply
baby_souffle 2 hours ago
> I should have said most home users. My point is that more bandwidth at this point probably won’t affect 99.999% of home users.

640k should be enough for everybody... DSL should be enough for everybody...

If you build it, they will come.

reply
afavour 4 hours ago
To quote the previous post:

> I've yet to come up against a download (even a torrent) that seems like it would have really benefitted from having the entire theoretical 1.5 pipe available.

There are many things along the way that would prevent a home user's download from the internet from hitting that 5GB/s speed. It's not that people should be "banned" from it or something, more that the investment cost isn't worth it.

reply
fmajid 3 hours ago
I regularly saturate my 1G home and 1G office connection syncing ~6GB files between the two. It's also nice to be able to download a 100G or so game quickly. Remote backups to cloud storage also benefit from fast upload speeds (and more importantly, restores).
reply
mlyle 3 hours ago
We have a 5gbps pipe; routinely download games from Steam at >3gbps; when I had to reinitialize my cloud backup it was >4gbps. All of this without impacting anyone else on the pipe.

Yah, our P95 bandwidth is just a few megabits per second. But it's not that expensive and routinely saves me a few minutes here and there.

10gbps on the LAN is more broadly useful. Pegging it for a file share is a daily occurrence.

reply
ProfessorLayton 3 hours ago
Also storage has gotten super expensive lately, and rather than upgrading my machines/consoles I've been offloading games and downloading them as needed and now am routinely downloading dozens of GB just to play a game.

My gaming time is limited so the faster the better.

reply
loeg 4 hours ago
How much $ extra are you willing to pay for the extremely occasional transfer at rates higher than gigabit? 2x? 3x?
reply
sandworm101 3 hours ago
Those SSDs are very likely cached, and so cannot keep up that pace for more than a quick burst of a few gigs.
reply
kobalsky 34 minutes ago
> Netflix streams in 4K/UHD (15Mbps)

proper 4K, like the one you watch from a Blu-ray, will have peaks of 150 Mbps.

the 4k we see on streaming services is awfully overcompressed.

and yeah, you can see the difference, it's day and night.

reply
zamadatix 3 hours ago
I regularly hit 4.7 gbps on a 5 Gbps line pulling files (usenet is usually faster than torrent, but the latter can be equally as fast depending on the torrent & how good the client software is). It's great to just grab an entire movie series in 4k Blu Ray remux quality in 5 minutes and go. No real need to plan ahead for anything.
reply
throwforfeds 58 minutes ago
And here I am sitting in Brooklyn and haven't had one apartment that has had fiber as an option. I get to pay Spectrum $90/month for "400/20" and in reality get 100/10.
reply
IshKebab 4 hours ago
Yeah, I foolishly paid for 500 Mb/s, but the only things I ever get over even 200 Mb/s in the UK are Steam downloads (and speed tests). Everything else seems to be throttled at roughly that rate.
reply
mikepurvis 3 hours ago
Telus gave me a good price and I'd already invested in a Unifi gateway that could accept the ONT directly, so that was fun to play with.
reply
glitchc 3 hours ago
One thing I'll add that I learnt in the process of doing my own house: it's not just the cable type but also the SFP module that can limit the distance. I used MikroTik hardware, and their S+RJ10 modules are limited to ~30m at 10Gbps speeds.
reply
DeathArrow 12 minutes ago
I wired my home with single mode fiber. It works like a charm and I can always upgrade the speed.
reply
hapless 4 hours ago
10GBASE-T is a sick joke

high latency, high error rates, and terrifying heat output from SFPs (which the author noted for himself)

the only cat6 left in my home network is the link to Verizon's ONT, because in their infinite wisdom the ONLY connectivity offered was 10GBASE-T

reply
zamadatix 2 hours ago
10GBase-T can still be nice if you're going to do native PoE (which the author did not) or you expect a mix of devices with 1/2.5/5 which you don't want to or can't upgrade/replace immediately (which is where it sounds like the author was situated).

The new 10GBASE-T SFPs are actually not too bad - you can get the full 100 meters at half the wattage it takes for the old space-heater generation to reach 30 meters. Based on the article, the author did not know there were newer, cooler options for about the same price.

reply
afavour 4 hours ago
Both impressive and surprising that thermals were the biggest barrier!

Meanwhile I'm sat here wishing I could justify running any ethernet in my apartment, but improving wi-fi tech means I never can...

reply
jauntywundrkind 3 hours ago
I wonder what the idle vs. at-load difference is for power draw/heat. Would really love to see this covered in reviews!

I wonder if you could negotiate down to 1gbit until you see some level of activity, if that would help at all?

I'm still eyeing 10Gb, but if my home needs +30 W for three computers, I don't feel like it's really worth it. Would love to see more details on power consumption from folks, especially tuning for idle.

reply
cyberax 3 hours ago
The Mikrotik switch is awesome, and it's still the most compact 10G switch available.

You can fix the thermal issue either by adding a small fan (Noctua is great) or by adding more heatsinking: https://pics.ealex.net/share/UxeSf_AWHLIuc-qzK5zl7JIgQvQDAZh...

I've been running it like this in a closed comm box for the last 3 years without any issues. SFP+ modules actually do not use that much power, it's just that it's concentrated into a small package, resulting in high temps.

reply
bot403 3 hours ago
Cripes. When possible, just do fiber and DACs. Faster and much cooler than 10GBASE-T copper, which uses an absurd amount of power per port, hence the need for all that cooling.
reply
m463 2 hours ago
yeah, the article though says...

"The apartment has structured cabling -- each room has one or more RJ45 sockets in the wall," ...

Which is the main problem most folks face.

wish the standard was "conduit" instead of "bake-this-year's-tech-into-the-wall", which doesn't always last...

reply
ssl-3 54 minutes ago
Folks have been saying that conduit is the way, that fiber is the future, and all kinds of things like that for decades now.

But the simple truth for all those decades is this: When there's already cat-whatever cable in the wall, it generally still works.

Decently-installed conduit (ie, actually-usable conduit) adds a ton of time and expense, which is why it is very seldom used for data circuits in residential structures.

The cable that exists is a lot better than the conduit that doesn't. And copper ethernet is bog-standard like MP3 is: It isn't the best in any technical sense at all, but everything supports it. Universal compatibility is pretty nice.

---

So the ongoing cost of copper 10GbE is electricity. Someone else here in the comments says that a copper 10GbE SFP+ module can use ~3 Watts, or that a newer one can use about 1.5 Watts.

We can be generous by using the larger figure of 3 Watts, and 8 devices..

With 4 ports, eight 10-gig endpoints @ 3 Watts each, and $0.19 per kWh [delivered]: That's $3.28 per month, or about $400 per decade.

If we assume 1.5 Watt endpoints, then that number halves.

If we subtract the power consumption of fiber SFP+ modules (or media converters or whatever) to make the number a relative comparison instead of an absolute, then that figure goes down further.

Not so bad, compared to conduit.
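That estimate works out like this (same assumed figures as above: 8 endpoints, $0.19/kWh delivered; none of these are measured values):

```python
# Back-of-the-envelope monthly electricity cost of copper 10GbE endpoints.
HOURS_PER_MONTH = 24 * 30
USD_PER_KWH = 0.19

def monthly_cost_usd(endpoints: int, watts_each: float) -> float:
    kwh = endpoints * watts_each * HOURS_PER_MONTH / 1000
    return kwh * USD_PER_KWH

print(round(monthly_cost_usd(8, 3.0), 2))  # 3.28 -- old-gen ~3 W modules
print(round(monthly_cost_usd(8, 1.5), 2))  # 1.64 -- new-gen ~1.5 W modules
```

Roughly $3.28/month at 3 W per endpoint, i.e. about $400 over a decade, halving with the newer modules, as stated above.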

reply
cyberax 2 hours ago
DACs are terrible, they are unwieldy and are always either too short or too long.

Fiber is much better, but in my case the house already had Cat5 Ethernet wiring (originally used just for phones!) everywhere.

Another use case for 10G Ethernet is PoE for the WiFi access points. Although you can't use SFP+ modules for that, of course.

reply
rrevi 4 hours ago
The device nomenclature alone is worth the read! (otherwise an impressive feat to see)
reply
tamimio 51 minutes ago
Regarding the Proxmox cluster: I would advise against setting up a cluster unless you really need to. It adds complexity around shutdown ordering, HA, quorum votes, and the Ceph monitor, among many other things, and if one server goes offline (or servers go down in the wrong order) it can impact the others, sometimes leading to a tough recovery or data loss.
reply
globular-toast 2 hours ago
I put cat6a in my last house and plan to in my current house "just in case". The cable isn't that much more expensive and nobody wants to do cabling again so why not.

But I'll keep using a gigabit switch because I have absolutely no idea what I'd use 10G for. It's crazy that gigabit was affordable for me as a student in the early 00s and between then and now we've gone from DVDs to 4K and it's still plenty fast enough. In fact, most people are happy with WiFi (not me, though).

reply
gigel82 4 hours ago
I was surprised that the old Cat5e in my home supports 10Gbps without any issue, so went ahead and upgraded the rest of the network with 10Gbps switches (expensive Ubiquiti gear, but worth it to talk at 10Gbps between all my machines, even though the internet is only 5Gbps Fiber).
reply
thefz 2 hours ago
For me the threshold has always been "can I stream a 4K movie from the NAS downstairs or from my seed box". No real need for anything above. Still I ran 10Gb single mode fiber in all the ducts.
reply
Boss0565 4 hours ago
What's the point?
reply