Visual Studio is a dog, but at least it's one dog - the real hell on Windows is .NET Framework. The sheer incongruity of which version of Windows has which version of .NET Framework installed, and which version of .NET your app will run in when launched... the actual solution at scale for universal Windows compatibility for your .NET app is to build a C++ shim that checks for .NET beforehand and launches the app with the correct version in the event of a multiple-version conflict - you can literally have 5 fully unique runtimes sharing the same .NET target.
If you somehow experience an actual dependency issue that involves glibc itself, I'd like to hear about it. Because I don't think you ever will. The glibc people are so serious about backward and forward compatibility, you can in fact easily look up the last time they broke it: https://lwn.net/Articles/605607/
Now, if you're saying it's a dependency issue resulting from people specifying wrong glibc version constraints in their build… yeah, sure. I'm gonna say that happens because people are getting used to pinning dependency versions, which is so much the wrong thing to do with glibc it's not even funny anymore. Just remove the glibc pins if there are any.
As far as the toolchain as a whole is concerned… GCC broke compatibility a few times, mostly in C++ due to having to rework things to support newer C++ standards, but I vaguely remember there was a C ABI break somewhere on some architecture too.
What? There was a huge breakage literally last year: https://sourceware.org/bugzilla/show_bug.cgi?id=32653
Glibc has been a source of breakage for proprietary software ever since I started using Linux. How many codebases had to add this line around 2014 (the year I bought my first laptop)?
__asm__ (".symver memcpy, memcpy@GLIBC_2.2.5");dlopen and dlmopen no longer make the stack executable if a shared library requires it
I'm not counting intentional breakage to improve system security. I'm not even sure I'd call it an ABI breakage; by a wide definition I guess it is a "change in glibc that makes things not work anymore". You also can't execute a.out binaries anymore, eh. And I don't think I would call something that affects primarily closed source binaries (and mono on i386) a "huge" issue either.
The fix is trivial ("execstack -s <binary>") and doesn't involve any change to installed versions of anything.
> __asm__ (".symver memcpy, memcpy@GLIBC_2.2.5");
A 12 year old forward compatibility issue that is fixed by upgrading glibc, okay. (Note this is the same timeframe as the s390 thing I linked.) I guess people shipping binaries need to be aware of it if they want to support glibc pre-2.14. That said, the general rule of shipping binaries is that you need to assume whatever you build against becomes the minimum required version; anything else is a gift.
I think my point about never pinning glibc stands, and how many other things do you know where you need to go back 12 years to find an ABI break?
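(For anyone who hasn't seen it in context, here's a minimal sketch of how that line is typically used: it binds memcpy back to the old x86-64 baseline symbol so a binary built on a newer glibc still loads on a pre-2.14 one.)

    /* Minimal sketch: pin memcpy to the old default version so a binary
       built against glibc >= 2.14 still resolves on older systems.
       (memcpy@GLIBC_2.2.5 is the x86-64 baseline; other arches differ.) */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char src[8] = "hello", dst[8];
        memcpy(dst, src, sizeof src);  /* links against memcpy@GLIBC_2.2.5 */
        return 0;
    }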
Sure, the ABI does not change at all, if you know in advance where the landmines are. Though it does have quite a few, especially for a large project with a huge surface area.
Also, just in case it's not clear, glibc has nothing to do with GCC.
A security update that also does breaking changes is sort of the worst case, because dependents are essentially damned if they apply it and damned if they don't. They can't always be avoided if an API is so insecure it's beyond repair - but then dependents will have to update in their own time because they will also have to fix their own implementations. So this would be an argument for version pins in that case.
Only the latest .NET Framework 4.8 is shipped with Windows at this point.
.NET 10 supports a Windows 10 build from 10 years ago.
We had just deprecated support for XP in 2020 - this was for a relatively large app publisher with ~10M daily active users on Windows. The installer was a C++ stub which checked the system's installed .NET versions and manually wrote the app.config before starting the .NET wrapper (or tried to run a portable .NET Framework installer if no .NET was found at all).
The app supported .NET 3.5* (2.0 base) and 4 originally, and the issue was that there was a ".NET Framework Client Profile" install on a surprising number of Windows PCs out there, and that version was incompatible with the app. If you just have a naked .NET exe, when you launch it (without an app.config in the current folder) the CLR will decide which version to run your app in - usually the "highest" version if several are detected... which in this case would start the app in the lightweight version and error out. Also, in the app.config file you can't tell it to avoid certain versions; you basically just say "use 4, then 2" and you're at the mercy of the CLR to decide which environment it starts you in.
This necessitated overrides in a static/native C++ stub that did some more intelligent checks first before creating a tailored app.config and starting the .NET app.
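For the curious, the knob we were fighting with is the supportedRuntime list in app.config; a rough sketch (not our actual file) looks like this - you state a preference order and the CLR picks among whatever is installed:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- rough sketch: prefer the CLR 4 runtime, fall back to 2.0 -->
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
        <supportedRuntime version="v2.0.50727" />
      </startup>
    </configuration>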
I feel for those who have to support an OS no longer supported by the vendor. That's a tough position to be in, not only when a customer comes across a bug that is due to the OS, but also because it keeps you from moving your desktop application forward.
I’m always kind of sad when a developer says to a customer “your OS is too old. We are dropping you on the floor.”
.NET Framework should only be used for legacy applications.
Unfortunately there are still many around that depend on .NET Framework.
Microsoft sadly doesn't prioritize this so this might still be the case for a couple of years.
One thing I credit MS for is that they make it very easy to use modern C# features in .NET Framework. You can easily write new Framework assemblies with a lot of C# 14 features. You can also add a few interfaces and get most of it working (although not optimized by the CLR, e.g. Span). For an example see this project: https://www.nuget.org/packages/PolySharp/
It's also easy to target multiple frameworks with the same code, so you can write libraries that work in .NET programs and .NET Framework programs.
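Multi-targeting is just a one-line change in an SDK-style project file; a minimal sketch (the TFMs here are only examples):

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <!-- build the same library for .NET Framework 4.8 and modern .NET -->
        <TargetFrameworks>net48;net8.0</TargetFrameworks>
        <LangVersion>latest</LangVersion>
      </PropertyGroup>
    </Project>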
The current solution is to use the CLI tools just like C++.
However have you looked into ComWrappers introduced in .NET 8, with later improvements?
I still see VB 6 and Delphi as the best development experience for COM; in .NET it was never that great, there are full books about doing COM in .NET.
Because that’s pretty much any freaking thing - oh Python, oh PHP, oh driving a fork lift, oh driving a car.
Once you invest time in using and learning it, it's a non-issue.
I do get pissed off when I want to use some Python lib but it just doesn't work out of the box, but there is nothing that works out of the box without investing some time.
Just like with a car: get a teenager into one and he will drive into the first tree.
Posting BS on Facebook shouldn't be the benchmark for how easy things should be.
I want to focus on the project itself; not jump through hoops in the build process. It feels hostile.
For cross-compiling to ARM from a PC in Rust in particular, you run one CLI command to add the target. Then `cargo run`, and it compiles and flashes, with debug output.
These are just anecdotes. I am probably doing something wrong, but it is my experience so far.
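For the embedded-ARM case it's roughly this (the target triple is just an example for a Cortex-M4F part, and the flash-on-run part assumes a runner like probe-rs is configured in .cargo/config.toml):

    rustup target add thumbv7em-none-eabihf
    cargo run --target thumbv7em-none-eabihf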
apt install build-essential or whatever the rpm equivalent is, gets you most of the way to building a C or C++ project.
My perspective: what I want from the OS is an allocator, threading, filesystem support, dates/times, and in some cases hardware access, like for GPUs, USB, etc.
I do not want my software to be dependent on a specific OS's package manager. I don't want a headache when I change the PC I'm compiling on, and really don't want to deal with a separate package manager for each OS I distribute the application for. Especially so given that there are so many linux distros.
However, there were version problems: some Linux distributions had only stable packages and therefore lacked the latest updates, and some had problems with multiple versions of the same library. This gave rise to the language-specific package managers. It solved one problem but created a ton of new ones.
Sometimes I wish we could just go back to system package managers, because at times, language-specific package managers do not even solve the version problem, which is their raison d'être.
Had fewer issues on EndeavourOS (Arch) compared to Fedora overall though... I will stay on Arch from now on.
That seems more a property of npm dependency management than linux dependency management.
To play devil's advocate, the reason npm dependency management is so much worse than kernel/OS management is that its scope is much bigger: 100x more packages, each package smaller, super deep dependency chains. OS package managers like apt/yum prioritize stability more and have a different process.
uv has more or less solved this (thank god). Night and day difference from Pip (or any of the other attempts to fix it, honestly).
At this point they should just deprecate Pip.
I’d really love to understand why people get so mad about pip they end up writing a new tool to do more or less the same thing.
1. It's easy to install.
2. It's not dog slow.
3. It automatically sets up venvs.
4. It automatically activates the venv before running commands.
5. It supports a lock file out of the box.
6. It lets you specify the index for private packages. This fixes a major security issue with pip that they continue to ignore.
7. It installs your code in the venv in editable mode by default.
8. It lets you install Python tools outside the venv (`uv tool install`).
9. It works reliably.
There's probably more that I've forgotten. If pip had all of that nobody would have felt the need for a rewrite. And if you've never run into any of those issues I guess you either haven't used Python much or didn't consider that there might be a less shitty way to do things.
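For anyone who hasn't tried it, the day-to-day flow looks roughly like this (a sketch; "requests" is just an example dependency):

    uv init myproject           # creates pyproject.toml and a starter script
    cd myproject
    uv add requests             # resolves, updates uv.lock, installs into the project venv
    uv run python main.py       # runs inside the venv, no manual activation
    uv tool install pre-commit  # installs a CLI tool outside the project venv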
That’s very subjective. I have ADHD and I’m very sensitive to things that break my flow, but I don’t run pip frequently while I’m writing code, and the couple extra seconds it takes to do its job don’t bother me that much.
> 3. It automatically sets up venvs.
Remember: explicit is better than implicit.
> 4. It automatically activates the venv before running commands.
Again: explicit is better than implicit.
> 5. It supports a lock file out of the box.
Some (me included) would say it’s a bug, not a feature.
> 6. It lets you specify the index for private packages. This fixes a major security issue with pip that they continue to ignore.
This is good - a lot of organisations concerned with that will block access to PyPI altogether and offer a selected cache for approved dependencies. Alternatively you can declare private dependencies as Git URLs with tags, bypassing the need for a private index.
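e.g. a pinned direct reference along these lines (PEP 508 syntax; the package name and URL are made up):

    mylib @ git+https://github.com/example/mylib.git@v1.2.0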
> 7. It installs your code in the venv in editable more by default.
That’s a nice touch, but goes against the explicit/implicit rule - a different behaviour automatically triggered by something environmental.
> 8. It lets you install Python tools outside the venv (`uv tool install`).
I always recommend not messing with the system’s Python.
> 9. It works reliably.
I’ll need you to elaborate on that. I might be used to the way pip works, but it’s been a while since I had it fail to do something I asked without there being a very good reason (an impossible conflicting requirement).
This brings me to another concern - I wouldn’t want Python development to become more like JavaScript post-npm. Every external dependency you bring in must be justified and understood. This goes for simple ones and especially for complex ones with lots of secondary dependencies. Any external dependency you bring is now your responsibility to manage forever. I’m fine not having to reimplement NumPy’s MSE function or Django’s ORM, but I’ve moved away from things like click because they save me just a little extra work at the cost of having to remember it’s there forever.
> And if you've never run into any of those issues I guess you either haven't used Python much or didn't consider that there might be a less shitty way to do things.
Or, perhaps, I’ve been using Python since 1.5 and have a deep understanding of why things are the way they are, having experimented with different ways, and just learned that new and convenient isn’t always the best in the long term.
It's not implicit. When you run `uv run ...` or `uv sync` you are explicitly asking it to automatically set up a venv.
> Some (me included) would say it’s a bug, not a feature.
Some (you included) would be wrong.
> This is good
What is good? That Pip doesn't have a way to avoid dependency confusion? What??
> but goes against the explicit/implicit rule
No it doesn't. It just does the right thing by default.
> I always recommend not messing with the system’s Python.
The reasons to avoid `pip install pre-commit` (outside a venv) are precisely because Pip can't do that in a sane way! `uv tool install pre-commit` suffers from none of the reasons to avoid `pip install pre-commit`.
You definitely have stockholm syndrome. Give uv a try.
Continuing to use Pip because Astral might stop maintaining uv in future is stupidly masochistic.
That's where I stopped.
Toolchains on linux distributions with adults running packaging are just fine.
Toolchains for $hotlanguage where the project leaders insist on reinventing the packaging game, are not fine.
I once again state these languages need to give up the NIH and pay someone mature and responsible to maintain packaging.
And when it inevitably leads to all kinds of weird issues the packagers of course can't be reached for support, so users end up harassing the upstream maintainer about their "shitty broken application" and demanding they fix it.
Sure, the various language toolchains suck, but so do those of Linux distros. There's a reason all-in-one packaging solutions like Docker, AppImage, Flatpak, and Snap have gotten so popular, you know?
This is only the case for debian and derivatives, lol. Rolling-release distributions do not have this problem. This is why most of the new distributions coming out are arch linux based.
> ...every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar.
Applying non-vanilla flavor (patches) to libraries in order to make new packages work with old packages. (It's not just a library thing of course--I've run into packages on Debian where some component gets shimmed out by some script that calls out to some script to dynamically build or download a component. But I digress.)
Maybe I'm just out of the loop here, but I'm not aware of this being a general practice in Fedora. Yes, Fedora does a lot of compatibility work of course, but afaik the general practice isn't to add Fedora-flavored patches.
> with every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar.
Quite frankly, as someone who started distro-hopping around ~2009 & only stopped around 2020, I have experienced a lot of Linux distributions, as well as poked at a lot of maintainer pipelines — it is simply categorically untrue for the majority of non-Debian derived Linux distributions.
It used to be that a decent number of Linux distributions (Slackware, Debian, RedHat, whatever) put in a lot of work to ensure "stability", yes. However "stability" was, for the most part, simply backporting urgent security fixes to stable libraries and to their last three or so versions. The only system that is very well known for shipping "a decades old version" of a system library would be Debian (or its biggest derivative, Ubuntu), because its release cycle is absolutely glacial, and most other Linux distributions do give somewhat of a shit about keeping relatively close to upstream. If only because maintaining a separate tree for a program just so you can use an ancient library is basically just soft-forking it, which incurs a metric shitton of effort and tech-debt that accrues over time.
One of the reasons I switched to and then ran at least one single Arch Linux installation for the back half of the 2010s (and used other computers or a dual boot for distrohopping) was partly for the challenge (I used to forget to read the NEWS before upgrading and got myself into at least one kernel panic that way lol), and partly because it was the only major rolling-release distribution at the time. In the last 6 years that's changed a lot, and now there's a whole slew of rolling-release distributions to choose from. The biggest is probably Steam's Holo distribution of Arch, but KDE's new distribution (replacing Kubuntu as the de-facto "KDE Distro") is based on Arch as well, along with I think Bazzite and CachyOS; Arch has always had a reputation (since before the 2010s) for keeping incredibly close to upstream in its package distributions, and I think that ideology has mostly won out now that a lot of Linux software is more generally stable and interacts reasonably well (Whereas, back when Pipewire was a thing... that was not the case).
Now, sure, I'm not going to the effort of compiling my decade+ experience of Linux into a spreadsheet of references of every major distribution I tried in the last ten years, just to prove all of this on an internet forum, but hopefully that write-up will suffice. Furthermore, as far as I can see, the burden of proof is not on me, but on `crote` to prove the claim they made.
Also, I get how if you've only ever used Debian-derivatives, all of this may appear to be incorrect. I would suggest, uh, not doing that — if only because it's a severely limiting view of what Linux systems are like. Personally, I've been a big fan of using Alpine's Edge release since before they had to drop linux-hardened and it's a really nice system to use as a daily driver.
I am so fed up with this! Please if you're writing an article using LLMs stop writing like this!
“This isn’t just [what the thing literally is]; it’s [hyperbole on what the thing isn’t].”
In the UK, Marks and Spencer have a long-running ad campaign built around it (“it’s not just food, it’s...”)
Em dashes are fine too.
Er, sorry. I meant: the purpose isn't just drama—it's a declaration of values, a commitment to the cause of a higher purpose, the first strike in a civilizational war of independence standing strong against commercialism, corporatism, and conformity. What starts with a single sentence in an LLM-rewritten blog post ends with changing the world.
See? And I didn't even need an LLM to write that. My own brain can produce slop with an em dash just as well. :)
That style of writing has been around forever. LLMs learned it from us. I'd basically call it "American sales pitch". It's a little bit product landing page, a little bit political opinion column, a little bit self-help book or motivational blogger.
It's always been a style of writing that tries to maximize engagement. The issue is that now we see it creeping into areas that never used to use it. It's not how developers tend to write. But now developers toss their original draft into an LLM asking it to "punch it up for engagement" -- or the LLM has just been trained to assume that's what someone wants by default -- and so now it stands out like a sore thumb.
And obviously, it's an appeal to emotion. Whereas developers tend to be looking for just the cold hard facts. So it's doubly off-putting.
You can then build a script/documentation that isolates your specific requirements and workloads:
https://learn.microsoft.com/en-us/visualstudio/install/use-c...
Had to do this back in 2018, because I worked with a client with no direct internet access on its DEV/build machines (and even when there was connectivity it was over traditional slow/high-latency satellite connections), so part of the process was also to build an offline install package.
(And - it is better on a shared-machine to have everything installed "machine-wide" rather than "per-user", same as PowerShell modules - had another client recently who had a small "C:" drive provisioned on their primary geo-fenced VM used for their "cloud admin" team and every single user was gobbling too much space with a multitude of "user-profile" specific PowerShell modules...)
But - yes, even with a highly trimmed workload it resulted in an 80GB+ offline installer. ... and as a server admin, I also had physical data-center access to load that installer package directly onto the VM host server via external drive.
> curl -L -o msvcup.zip https://github.com/marler8997/msvcup/releases/download/v2026...
No thanks. I’m not going to install executables downloaded from an unknown GitHub account named marler8997 without even a simple hash check.
As others have explained the Windows situation is not as bad as this blog post suggests, but even if it was this doesn’t look like a solution. It’s just one other installation script that has sketchy sources.
Just like those complaining about curl|sh on Linux, you are confusing install instructions with source code availability. Just download the script and read it if you want. The curl|sh workflow is no more dangerous than downloading an executable off the internet, which is very common (if stupid) and attracts no vitriol. In no way does it imply that you can not actually download and read the script - something that actually can't be done with downloaded executables.
You do because the downloaded ZIP contains an EXE, not a readable script, that then downloads the compiler. Even if you skip that thinking "I already have VS set up", the actual build line calls `cl` from a subdirectory.
I'm not going to reconstruct someone's build script. And that's just the basic example of a one file hello world, a real project would call `cl` several times, then `link`, etc.
Just supplying a SLN + VCXPROJ is good enough. The blog post's entire problem is also solved by the .vsconfig[1] file that outlines requirements. Or you can opt for CMake. Both of these alternatives use a build system I can trust over randomgithubproject.exe, along with a text-readable build/project file I can parse myself to verify I can trust it.
1: https://learn.microsoft.com/en-us/visualstudio/install/impor...
It actually is for a lot of subtle reasons, assuming you were going to check the executable checksum or something, or blindly downloading + running a script.
The big thing is that it can serve you up different contents if it detects it's being piped into a shell, which is in theory possible, but also because if the download is interrupted you end up with half of the script run, and a broken install.
If you are going to do this, its much better to do something like:
sh -c "$(curl https://foo.bar/blah.sh)"
Though ideally yes you just download it and read it like a normal person.

[0] https://github.com/marlersoft/zigwin32
[1] https://github.com/microsoft/win32metadata
But if this is LLM content then it does seem like the LLMs are still improving. (I suppose the AI flavour could be from Grammarly's new features or something.)
This was either written by Claude or someone who uses Claude too much.
I wish they could be upfront about it.
It's hated by everyone, why would people imitate it? You're inventing a rationale that either doesn't exist or would be stupider than the alternative. The obvious answer here is they just used an LLM.
> and clearly it serves some benefit to readers.
What?
It could be involuntary. People often adopt the verbal tics of the content they read and the people they talk with.
So, someone who falls on the side of not completely hating LLMs for everything (which is most people), could easily copy the style by accident.
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing#...
The shitty AI writing is so distracting I had to stop reading.
We need a dictionary like this :D
Know what's more annoying than AI posts? Seeing accusations of AI slop for every. last. god. damned. thing.
So if you see LinkedInglish on LinkedIn, it may or may not be an LLM. Outside of LinkedIn... probably an LLM.
It is curious why LLMs love talking in LinkedInglish so much. I have no idea what the answer to that is but they do.
The actual mechanism, I have no clue.
This is purely an artifact of training and has nothing to do with real human writing, which has much better variety.
I came back around 2017*, expecting the same nice experience I had with VB3 to 6.
What a punch in the face it was...
I honestly cannot fathom anyone developing natively for Windows (or even OSX) in this day and age.
Anything will be a webapp or a rust+egui multi-platform app developed on Linux, or nothing. It's already enough the amount of self-hate required for android/ios.
* not sure of the exact date. It was right in the middle of the WPF crap being forced as "the new default".*
does proton make current VB development as straightforward as it was on VB6?
What if it was?
What if it wasn't?
What if you never find out definitely?
Do you wonder that about all content?
If so, doesn't that get exhausting?
I completely agree with your parent that it's tedious seeing this "fake and gay" problem everywhere and wonder what an unwinnable struggle it must be for the people who feel they have to work out if everything they read was AI written or not.
I hardly ever go through a post fisking it for AI tells, they leap out at me now whether I want them to or not. As the density of them increases my odds of closing the tab approach one.
It's not a pleasant time to read Show HNs but it just seems to be what's happening now.
Exactly!
I personally like the content and the style of the article. I never managed to accept going through the pain of installing and using Visual Studio and all these absurd procedures they impose on their users.
[1] https://www.pangram.com/history/300b4af2-cd58-4767-aced-c4d2...
Alternatively, there's this:
Install Visual Studio Build Tools into a container to support a consistent build system | Microsoft Learn
https://learn.microsoft.com/en-us/visualstudio/install/build...
I don't understand how open source projects can insist on requiring a proprietary compiler.
To answer your question, the headers.
Just use Clang + MSVC STL + WinSDK. Very simple.
Care to elaborate?
In the year 2026 there is no reason to use MinGW. Just use Clang and target MSVC ABI. Cross-compiling Linux->Windows is very easy. Cross-compiling Windows->Linux is 1000x harder because Linux userspace is a clusterfuck of terrible design choices.
If a project "supports Windows" by only way of MinGW then it doesn't really support Windows. It's fundamentally incompatible with how almost all Windows software is developed. It's a huge red flag and a clear indicator that the project doesn't actually care about supporting Windows.
*taps on the name of this site*
If I'm writing some cross-platform bit of software, my interest in supporting Windows is naturally in producing binaries that run on Windows.
Why on earth should I give a flying toss how "almost all Windows software is developed", or which kinds of ABIs are BillG-kissed and approved? Good god. Talk about fetishising process over outcome.
If your focus is on outcome that I promise and assure you that using MinGW will make producing a positive outcome significantly harder and more frustrating.
With modern Clang there really isn’t a justifiable reason to use MinGW.
I never had issues with C ABI, calling into other DLLs, creating DLLs, COM objects, or whatever. I fail to see what is fundamentally incompatible here.
`winget install --id Microsoft.VisualStudio.2022.BuildTools`.
If you need the Windows(/App) SDK too for the WinRT-features, you can add `winget install --id Microsoft.WindowsSDK.10.0.18362` and/or `winget install --id Microsoft.WindowsAppRuntime.1.8`
I used to just install the desktop development one and then work through the build errors until I got it to work, which was somewhat painful. (Yes, .vsconfig makes this easier but it still didn’t catch everything when last I was into Windows dev).
Newer C# features like ref returns, structs, spans, et. al., make the overhead undetectable in many cases.
"winget install Microsoft.VisualStudio.BuildTools"
"winget install Microsoft.WindowsSDK.10.0.26100"
Every language should have a tool like Python uv.
Install multiple versions of Windows SDK. They co-exist just fine; new versions don’t replace old ones. When I was an independent contractor, I had 4 versions of visual studio and 10 versions of windows SDK all installed at once, different projects used different ones.
The only issue currently plaguing Windows development is the mess with WinUI and WinAppSDK since Project Reunion, however they are relatively easy to ignore.
People don't need any UNIX biases to just want multiple versions of MSVS to work the way Microsoft advertises. For example, with every new version of Visual Studio, Microsoft always says you can install it side-by-side with an older version.
But every time, the new version of VS has a bug in the install somewhere that changes something that breaks old projects. It doesn't break for everybody or for all projects but it's always a recurring bug report with new versions. VS2019 broke something in existing VS2017 installs. VS2022 broke something in VS2019. etc.
The "side-by-side-installs-is-supposed-to-work-but-sometimes-doesn't" tradition continues with the latest VS2026 breaking something in VS2022. E.g. https://github.com/dotnet/sdk/issues/51796
I once installed VS2019 side-by-side with VS2017 and when I used VS2017 to re-open a VS2017 WinForms project, it had red squiggly lines in the editor when viewing cs files and the build failed. I now just install different versions of MSVS in totally separate virtual machines to avoid problems.
I predict that a future version VS2030 will have install bugs that breaks VS2026. The underlying issue that causes side-by-side bugs to re-appear is that MSVS installs are integrated very deeply into Windows. Puts files in c:\windows\system32, etc. (And sometimes you also get the random breakage with mismatched MSVCRT???.DLL files) To avoid future bugs, Microsoft would have to re-architect how MSVS works -- or "containerize" it to isolate it more.
In contrast, gcc/clang can have more isolation without each version interfering with each other.
I'm not arguing this thread's msvcup.exe tool is necessary but I understand the motivations to make MSVS less fragile and more predictable.
That's why docker build environments are a thing - even on Windows.
Build scripts are complex, and even though I'm pretty sure VS offers pretty good support for having multiple SDK versions at the same time (that I've used), it only takes a single script that wasn't written with versioning in mind, to break the whole build.
But this isn’t true. Many distros package major versions of GCC/LLVM as separate packages, so you can install and use more than one version in parallel, no Docker/etc required.
It can indeed be true for some things - such as the C library - but often not for the compilers.
https://developers.redhat.com/articles/2025/04/16/gcc-and-gc...
and use some scripts (chroot or LD_LIBRARY_PATH maybe, not an expert) to create a separate environment for the given toolset.
I wouldn't start an app in most of them today, but I wouldn't rewrite one either without a good reason.
There’s a fun bug on WPF and form backgrounds for example which means on fractional DPI screens the background is tiled unpredictably. Had to patch that one up rather quickly one day and it was a mess due to how damn complicated WPF is.
Programming Windows with MFC, Second Edition, by Jeff Prosise
Programming Windows®, Fifth Edition (Microsoft Programming Series), by Charles Petzold
In the span of ~2hrs I didn't manage to find a way to please the Zig compiler to notice "system" libraries to link against.
Perhaps I'm too spoiled by installing a system wide dependency in a single command. Or Windows took a wrong turn a couple of decades ago and is very hostile to both developers and regular users.
The system libraries should only ship system stuff: interaction with the OS (I/O, graphics basics, process management), accessing network (DNS, IP and TLS). They should have stable APIs and ABIs.
Windows isn't hostile. It has a different paradigm and Unix (or more correctly usually GNU/Linux) people do not want to give up their worldview.
PCRE is basically only your app's dependency. It has nothing to do with the rest of the operating system. So it is your responsibility to know how to build and package it.
All dependencies should be vendored into your project.
At $workplace, we have a script that extracts a toolchain from a GitHub actions windows runner, packages it up, stuffs it into git LFS, which is then pulled by bazel as C++ toolchain.
This is the more scalable way, and I assume it could still somewhat easily be integrated into a bazel build.
Edit: Uses a shit load less actual energy than full-building a product thousands of times that never gets run.
I have a vague memory of stumbling upon this hell when installing the ldc compiler for dlang [1].
1. https://wiki.dlang.org/Building_and_hacking_LDC_on_Windows_u...
That package manager command, at the very least, pulls in 50+ packages of headers, compilers, and their dependencies from tens of independent projects, nearly each of them following its own release schedule. Linux distributions have it much harder orchestrating all of this, and yet it's Microsoft that cannot get its wholly-owned thing together.
How do you match it with CUDA to compile the repos from source?
Then you also specify target platform sdk versions in the .csproj file and VS will automatically prompt the developer to install the correct toolchain.
[0] https://learn.microsoft.com/en-us/dotnet/core/tools/global-j...
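For reference, the global.json in [0] is just a small file at the repo root that pins the SDK; something like this (the version number is only an example):

    {
      "sdk": {
        "version": "8.0.100",
        "rollForward": "latestFeature"
      }
    }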
What you’re actually wanting here is .vsconfig https://learn.microsoft.com/en-us/visualstudio/install/impor...
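It's a small JSON file you check into the repo, and the VS installer offers to add whatever's missing. Roughly like this (the component IDs below are examples; list whatever your project needs):

    {
      "version": "1.0",
      "components": [
        "Microsoft.VisualStudio.Workload.NativeDesktop",
        "Microsoft.VisualStudio.Component.VC.Tools.x86.x64"
      ]
    }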
* I wonder if Microsoft intentionally doesn't provide this first party to force everyone to install VS, especially the professional/enterprise versions. One could imagine that we'd have a vsproject.toml file similar to pyproject.toml that just does everything when combined with a minimal command line tool. But that doesn't exist for some reason.
You still have to install the tool that processes pyproject.toml so that doesn’t seem fair to hold against it. You are right that you still have to know whether to install 2022 or 2026.
This tool would be a great help if I knew beforehand.
This is fantastic and someone at Microslop should take notes.
LLVM doesn't come with the C library headers (VCRuntime) or the executable runtime startup code (VCStartup), both of which are under Visual Studio proprietary licenses. So to use Clang on Windows without MinGW, you need Visual Studio.
Additionally the cross-compiler on Linux also produces binaries with no extra runtime requirements.
Compared to older Mingw64 environments those link with the latest UCRT so you get almost the same style executable as Visual Studio.
The only difference for C is that it uses Mingw exception handling and global initialization code, and it uses Itanium ABI for C++.
A major part of the incompatibility with older versions of Windows is just because newer VS runtimes cut the support artificially. That's it. Many programs would otherwise work as-is or with just a little help.
The Windows-native software you build with MSYS2 can be shipped to and run by users that don’t have anything of MSYS2 installed.
Git installs its own Mingw and Msys2 stuff but mostly compiled for a Mingw environment so they consume Windows paths natively instead of using MSYS2/Cygwin path conversion. That's why when you have mixed PATH variable all hell breaks loose with Git.
Doesn't it come with `pacman` too?
MSYS2 is there to just provide the basics so you can develop programs that are Windows native but use some of the tools that have really strong Unix dependence like shells or Make. They depend on the existence of syscalls like `fork` or forward slash being the directory separator.
Without Cygwin enabling the path, it wouldn't be possible to build GCC for Windows without completely changing its build system. It would be a DOA fork while Mingw and PE32+ support is a part of GCC nowadays.
The nice and genius part of MSYS2 is that it is there primarily to encourage you to develop native Windows software that has better cross-platform behavior, rather than Cygwin alone. If Microsoft had made a better, free-of-charge C compiler in the early 2000s that adhered to the standards better, we probably wouldn't need Mingw to build cross-platform apps. Now MSVC is still free of charge for only open source and individuals.
[1] "Cygwin POSIX emulation engine", https://packages.msys2.org/base/msys2-runtime [2] https://github.com/msys2/MSYS2-packages/tree/master/msys2-ru...
It gives you a *nix-like shell/dev environment and tools, but you build native software that runs on Windows systems that don’t have or need to have all/parts of MSYS2/Cygwin installed.
I built a network daemon using the MSYS2 CLANG64 environment and llvm toolchain on Windows 10.
Windows 7 x64 users could download the compiled single-file executable and run it just fine, so long as they installed Microsoft’s Universal C Runtime, which is a free download from Microsoft’s website.
I get your point. Although my point is that there is actually zero need for MSYS at all for this, even as a developer, and especially not with the 'CLANG64' environment. These binaries themselves are built to run in the MSYS2 environment. This is how I cross-compile from Windows... to Windows with LLVM-MinGW[1]:
> (gci Env:PATH).Value.Split(';') | sort
> clang-21.exe --version
clang version 21.1.2 (https://github.com/llvm/llvm-project.git b708aea0bc7127adf4ec643660699c8bcdde1273)
Target: x86_64-w64-windows-gnu
Thread model: posix
InstalledDir: C:/Users/dpdx/AppData/Local/Microsoft/WinGet/Packages/MartinStorsjo.LLVM-MinGW.UCRT_Microsoft.Winget.Source_8wekyb3d8bbwe/llvm-mingw-20250924-ucrt-x86_64/bin
Configuration file: C:/Users/dpdx/AppData/Local/Microsoft/WinGet/Packages/MartinStorsjo.LLVM-MinGW.UCRT_Microsoft.Winget.Source_8wekyb3d8bbwe/llvm-mingw-20250924-ucrt-x86_64/bin/x86_64-w64-windows-gnu.cfg
[1]: https://github.com/mstorsjo/llvm-mingw

I'm certain I haven't misunderstood the point of MSYS2's CLANG64 and other environments.
> These binaries themselves are built to run in the MSYS2 environment
I'm not sure if you're referring to the toolchain binaries or the binaries one produces with them.
The CLANG64, etc. environments are 100% absolutely for certain for building software that can run outside of any MSYS2 environment!
You can, of course, build executables specifically intended to run inside those environments, but that’s not the primary use case.
> (gci Env:PATH).Value.Split(';') | sort
I don't want to use PowerShell or Cmd.exe when doing dev stuff on Windows. I want to do CLI work and author scripts in and for modern Bash, just like I would for Linux and macOS. I want to write Makefiles for GNU make, just like...
Now, sometimes there are bumps and sharp edges you have to deal with via `if [[ -v MSYSTEM ]]; then`, similar in Makefile, cygpath conversion, template/conditional code in sources, and so on. But that's a small price to pay, from my perspective, for staying in the same mental model for how to build software.
There. I think that sums it up.
Just give me a VM. Then you will know, and I will know, every facet of the environment the work was done in.
https://gist.github.com/mmozeiko/7f3162ec2988e81e56d5c4e22cd...
If you're just a guy trying to compile a C application on Windows, and you end up on the mingw-w64 downloads page, it's not exactly smooth sailing: https://www.mingw-w64.org/downloads/
Supporting Windows without MinGW garbage is really really easy. Only supporting MinGW is saying “I don’t take this platform seriously so you should probably just ignore this project”.
Has anyone tried doing this on ReactOS? I know this is a touch DIY, but it would be interesting to know if Win software could be built on ReactOS...
Really? A 50GB IDE? How the heck does one know what goes in there?
My beloved FreeBSD 15.0 PLUS its Linux VM PLUS its docker env PLUS its dependencies and IDE are close to 26GB, and I'm pretty sure I'm taking into account a lot of things I shouldn't, so the actual count is much less than that.
Developing software under a Windows platform is something that I cannot understand, since many many many years ago.
A bug got opened against rustup at some point about installing the headless toolchain by itself. I'll see if I can find it
edit: VSCode bug states this more clearly https://github.com/microsoft/vscode/issues/95745
winget install Microsoft.VisualStudio.2022.BuildTools
What is the minimal winget command to get everything installed, ready for : cl main.cpp ?
PS: I mean a winget command which does not ask anything, neither on the command line nor via GUI? Totally unattended.
To install it all in a single step, and beware I haven't tested this, you're better off downloading and running it yourself
vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools
adding whatever workloads you need. Then you'll need to locate and run vcvarsall.bat to set up the environment, which will require some clever code if you're doing it from PowerShell instead of a .bat, and then you can finally call the compiler.
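Put together, the unattended path ends up looking roughly like this from a .bat (untested; the install path assumes the default 2022 Build Tools location):

    vs_buildtools.exe --quiet --wait --norestart --add Microsoft.VisualStudio.Workload.VCTools --includeRecommended
    call "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64
    cl main.cpp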
You’ve never experienced genuine pain in your life. Have you tried to change the GCC compiler version in Linux?
apt install gcc-11
CC=gcc-11 make
If it’s not packaged and you’ve got to build it yourself, Godspeed. And if you’ve got to change libc versions…
If you are compiling for your native system, yes.
But as soon as you try cross-compiling, you are in for a lot of pain.
Install:
- contrary to the blog post, the entirety of Visual Studio, because the IDE and debugger is *really damn good*.
- LLVM-MinGW[1]
Load the 'VSDevShell' DLL[2] for PowerShell, and you're good to go, with three different toolchains now:

- cl.exe from VS.
- clang-cl.exe - you don't need to install this separately in VS; just use the above-mentioned llvm-mingw clang.exe as `clang.exe --driver-mode=cl /winsysroot <path\to\Windows SDK> /vctoolsdir <path\to\VC>`. Or you can use it in GNU-driver-style mode with -Xmicrosoft-windows-sys-root. This makes it target the Windows ABI and link against the VS SDK/VC tools.
- `clang.exe` that targets the Itanium ABI and links against the MinGW libraries and LLVM libc++.
Done and dusted. Load these into a CMake toolchain and never look at them again. People really like overcomplicating their lives.
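If it helps, a toolchain file for the clang-cl flavour can be as small as this sketch (all paths are placeholders for your own llvm-mingw install and sysroot directory; the linker may need pointing at the same sysroot too):

    # clang-cl-toolchain.cmake - rough sketch, adjust paths to your setup
    set(CMAKE_C_COMPILER   "C:/llvm-mingw/bin/clang-cl.exe")
    set(CMAKE_CXX_COMPILER "C:/llvm-mingw/bin/clang-cl.exe")
    set(CMAKE_C_FLAGS_INIT   "/winsysroot C:/winsysroot")
    set(CMAKE_CXX_FLAGS_INIT "/winsysroot C:/winsysroot")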
At the same time, learn the drawbacks of all toolchains and use what is appropriate for your needs. If you want to write Windows drivers, then forget about anything non-MSVC (unless you really want to do things the hard way for the hell of it). link.exe is slow as molasses, but can do incremental linking natively. cl.exe's code gen is (sometimes) slightly worse than Clang's. The MinGW ABI does not understand things like SAL annotations[3], and this breaks very useful libraries like WIL[4] (or libraries built on top of them, like the Azure C++ SDK[5]). The MinGW headers sometimes straight up miss newer features that the Windows SDK comes with, like cfapi.h[6].
[1]: https://github.com/mstorsjo/llvm-mingw
[2]: https://learn.microsoft.com/en-gb/visualstudio/ide/reference...
[3]: https://learn.microsoft.com/en-gb/cpp/c-runtime-library/sal-...
[4]: https://github.com/microsoft/wil
[5]: https://github.com/Azure/azure-sdk-for-cpp
[6]: https://learn.microsoft.com/en-gb/windows/win32/cfapi/build-...
Good to know LLVM works on windows too though.
Not really. It's just different. As a cross-platform dev, all desktop OSs have their own idiosyncrasies that add up to a net of 'they are all equally rather bad'.
this seems to go down the road towards attempts at deterministic-ish builds, which I think is probably a bad idea since the whole ecosystem is built on rolling updates, and a partial move towards pinning dependencies (using bespoke tools) could get complicated.
I fixed windows native development. Band together friends, force WSL3 as the backbone of Windows.
are we doomed to only read AI slop from now on? to get a couple paragraphs in and suddenly be hit with the realization that is AI?
it's all so tiresome
Step 2. Install your preferred flavor of Linux
Step 3. Set-up dev tools
Step 4. Profit??
What year is it?! Also, haven't heard any complaints regarding VS on MacOS, how ironic...
(To be clear, I haven't tried this with Nix, but I have with other distros.)
You just need the required build tools.
If you've ever had to setup a CI/CD pipeline for a Visual Studio project then you've had to do this.
This script is great. Just use it. The title saying “I fixed” is moderately offensive glory stealing.
What needs to be fixed is the valley between unix and windows development for cross-os/many-compiler builds, so one that does both can work seamlessly.
It's not an easy problem and there are lots of faux solutions that seem to fix it all but don't (in builds, the devil is in edge cases).
> msvcup is inspired by a small Python script written by Mārtiņš Možeiko.
No. Martins fixed. OP made a worse layer on top of Martins great script.
(It would have been better for us to catch this sooner, but in this case someone had to explain the name to me. Out of respect for HN's many Francophone readers, I think it's best to apply the rule.)
* https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
TLDR: I don't understand my native command line, see how lost I got when I tried to do my thing in a different environment.
- Not a problem unique to Windows or even MSVC; he's gonna hate Xcode,
- Making Python a bootstrap dependency = fail,
- Lacks self-awareness to recognize aversion vs avoidance,
My background is distinctly non-Windows, but I survive around Windows so well that people think I'm a Mickeysoft type. And no, I don't use mingw, cygwin, ...
If any of the obstacles this user faced were legitimate, nobody would ever make any money on Windows, including and especially Microsoft - a company whose developers have the same challenges.
I'm being harsh because _mea quondam culpa_ and it's correctable.
Everything this user went thru is the result of aversion instead of avoidance.
To _avoid_ long deep dives into Windows, you need to recognize there is a different vocabulary and a radically different jargon dialect at play.
1. Learn a tiny minimum of Powershell; it's based on the same POSIX spec as bash and zsh, but like Python, JavaScript, etc., instead of bytes as the fundamental unit, it uses objects. So there's less to learn to reach a greater level of convenience than soiling yourself with DOS/CMD/BAT. On Windows, pwsh has a default set of Linux-like aliases to minimize the learning required for minimal operability. And you never have to type \ instead of / for a directory separator.
2. Microsoft make money from training. To sell their meat-free steak (* ingredient: saw dust), they feed the suits an all-you-can-eat calorie, nutrition, and protein free buffet of documenting everything in great detail and routinely "streamlining" the names and terminology.
Development on Windows is in a different reference frame, but relative to their own reference frames, they're ultimately not all that different.
Approach in your "foreign language" mindset; English alphabet but the words mean different things.
3. What not how. "How do I grep" means you are trying to random access bytes out of a random access character stream. "What's the command to search for text in files?" well, if you're bloody mindedly using cmd, then it's "find".
4. Seriously, learn a little Powershell.
I only approached Powershell hoping to gain material for a #SatansSphincter anti-ms rant while using it as a Rosetta Stone for porting shell scripts in our CI for Windows.
I mean, it is based on the same POSIX spec as sh, bash, and zsh, with a little Perl thrown in. That can't not go horribly, insidiously, 30-rock wrong in the hands of MS, right?
Turned out, it's the same paradigm shift perl/shell users have to make when coming into Python:
from `system("ps | grep hung")` to `"hung" in system("ps")`; from `system("ifconfig -a | sed 's/\<192\.168\.0\./10.0.0./g'")` to `system("ifconfig -a").replace("192.168.0.", "10.0.0.")`
`grep` is a command that applies an expression to a byte stream, often the output of a command.
In powershell, executing a command is an expression. In the case of a simple command, like "ps", that expression resolves to a String, just like system(...) does in Python.
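Concretely, the object-pipeline versions of those two examples look something like this (rough equivalents, not exact):

    # filter process objects instead of grepping text
    Get-Process | Where-Object { $_.ProcessName -match 'hung' }

    # rewrite addresses by operating on the strings a command returns
    (ipconfig) -replace '192\.168\.0\.', '10.0.0.'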
Learning even a small amount of Powershell is immensely helpful in better understanding your enemy if you're going to have to deal with Windows. The formal names for official things use "verb-singularnoun".
That last part of the convention is the magic: the naming of things on Windows is madness designed to sell certifications, so crazy even MS ultimately had to provide themselves a guide.
Now with AI, I would think that porting a native program to the browser wouldn't be the chore it once was.
As long as you don't give a shit about the fact that your baseline memory consumption is now 500MB instead of 25MB, and that 80% of your CPU time is wasted on running javascript through a JIT and rendering HTML instead of doing logic, no.
If you don't give a shit about your users or their time, there's indeed no longer a need to write native programs.
funny how Electron apps tend to have many more users than their native "performant" counterparts, isn't it?
I did try using python and js but the variable explorer is garbage due to 'late binding'.
I thought this was just my ignorance, but I've asked experts, AI, and google searched and they unfortunately agree. That said, some people have created their own log/prints so they don't need to deal with it.
Incremental compilation and linking, parallel builds, hot code reloading, REPL, graphical debugging, optimised builds, GPU debugging....
Go is better left for devops stuff like Docker and Kubernetes, and Zig remains to be seen when it becomes industry relevant beyond HN and Reddit forums.
I got anxiety reading the article, describing exactly why it sucks. It's nice to know from the article and comments here there are ways around it, but the way I have been doing it was the "hope I check the right checkboxes and wait a few hours" plan. There is usually one "super checkbox" that will do the right things.
I have to do this once per OS [re]install generally.
The Visual Studio toolchain does have LTSC and stable releases - no one seems to know about them though. See: https://learn.microsoft.com/en-gb/visualstudio/releases/2022... - you should use these if you are not a single developer and have to collaborate with people. Like back in the old days when we had pinned versions of the toolchain across the whole company.
[1] https://download.visualstudio.microsoft.com/download/pr/5d23...
You only get access to the LTSC channel if you have a license for at least Visual Studio Professional (Community won't do it); so a lot of hobbyist programmers and students are not aware of it.
On the other hand, its existence is in my experience very well-known among people who use Visual Studio for work at some company.
How strict Microsoft is with enforcement of this license is another story.
> Previously, if the application you were developing was not OSS, installing VSBT was permitted only if you had a valid Visual Studio license (e.g., Visual Studio Community or higher).
From (https://devblogs.microsoft.com/cppblog/updates-to-visual-stu...). For OSS, you do not even need a Community License anymore.
You may not compile OSS software developed by your own organisation.
The OSS software must be unmodified, "except, and only to the extent, minor modifications are necessary so that the Open Source Dependencies can be compiled and built with the software."
https://visualstudio.microsoft.com/license-terms/vs2026-ga-d...
> if you and your team need to compile and develop proprietary C++ code with Visual Studio, a Visual Studio license will still be required.
It's why the example they give in the article is a Node.js application with native open source dependencies (e.g. sqlite3).
EDIT: it's clearer when read in context of the opening paragraph:
> Visual Studio Build Tools (VSBT) can now be used for compiling open-source C++ dependencies from source without requiring a Visual Studio license, even when you are working for an enterprise on a commercial or closed-source project.
I don’t need Visual Studio to write, read, compile, or link any code using the toolchain.
GPL was made in response to restrictive commercial licensing. Yes, it uses the same legal instrument (a license), but it was made in response!
So if proprietary licensing ceases to exist, then it's not a problem if the GPL also ceases to exist.
Also: it's quite obvious to me that IP law nowadays is too much. It may have been a good idea at first, but now it's a monster (and people seem to die because of it: Aaron Swartz and Suchir Balaji come to mind).
https://www.stacksocial.com/sales/microsoft-visual-studio-pr...
At least in the EU, this is legal.
An article about a court decision by the EuGH from 2012:
https://www.heise.de/hintergrund/EuGH-Gebrauchte-Softwareliz...
Another court decision from the BGH (the highest German civil court) from 2014 that builds on this EuGH decision:
https://www.heise.de/news/BGH-begruendet-Rechtmaessigkeit-de...
There are licensing constraints; IANAL, but essentially you need a Pro+ license on the account if you're going to use it to build commercial software or in a business environment.
VS 2008 is starting to show the elephantine... no, continental land-mass bloat that VS is currently at, and has a number of annoying bugs, but it's still vastly better than anything after about VS 2012. And the cool thing is that MS can't fuck with it any more. When I fire up VS tomorrow it'll be the exact same VS I used today, not with half a dozen features broken, moved around, gone without a trace, ...