How's your experience with the constantly changing language? What do your update/rewrite cycles look like? Are there cases where packages you may use fall behind the language?
I know Bun's been using Zig to a degree of success; I was wondering how the rest were doing.
The language and stdlib changing hasn't been a major pain point in at least a year or two. There was some upgrade a couple of years ago that took us a while to land (I think it might have been 0.12 -> 0.13 but I could be misremembering the exact version) but it's been smooth sailing for a long time now.
These days I'd put breaking releases in the "minor nuisance" category, and when people ask what I've liked and disliked about using Zig I rarely even remember to bring it up.
Also, I'm excited about trying out your language even more so than Zig. :)
Example programs that you couldn't easily express in other languages?
This might be on purpose given the first words are "Work in progress" and "not ready for release", but linking as above does lose some value.
He wasn't pitching the language directly, but linking to the codebase as that was what was relevant to the comment he was replying to.
These larger Zig projects will stick to a tagged release (which doesn't change), and upgrade to newly tagged releases, usually a few days or months after they come out. The upgrade itself takes about a week, depending on the number of changes to be done. These projects also tend not to use other Zig dependencies.
[0]: https://github.com/tigerbeetle/tigerbeetle/pulls?q=is%3Apr+a...
[1]: https://github.com/Syndica/sig/pulls?q=is%3Apr+author%3Akpro...
Have you tried Rust? How does it compare to Zig?
* just asking
- Zig: Let's have a simple language with as few footguns as possible and make good code easy to write. However, we value explicitness and allow the developer to do anything they need to do. C interoperability is a primary feature that is always available. We have run-time checks for as many kinds of undefined behaviour as we can (see the sketch after this list).
- Rust: Let's make the compiler the guardian of what is safe to do. Unless the developer hits the escape hatch, we will disallow behaviour to keep the developer safe. To allow the compiler to reason about safety we will have an intricate type system with concepts like lifetimes and move semantics. This will get complex sometimes, so we will have a macro system to hide that complexity.
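A tiny sketch of the kind of run-time check that Zig bullet refers to (my example, not the commenter's): in Debug and ReleaseSafe builds, Zig traps on integer overflow instead of silently wrapping.

    const std = @import("std");

    fn bump(x: u8) u8 {
        // In Debug/ReleaseSafe this addition is checked; overflow triggers a
        // panic ("integer overflow") rather than wrapping around to 0.
        return x + 1;
    }

    pub fn main() void {
        var x: u8 = 255;
        x = bump(x); // traps here in safe build modes
        std.debug.print("{d}\n", .{x});
    }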
Zig is a lot simpler than Rust, but I think it asks more of its developer.
* except for having unused variables. Those are so dangerous the compiler will reject the code every time.
Problem solved, everyone happy.
They also made a carriage return crash the compiler so it wouldn't work with any default text files on Windows, and then they blamed the users for using Windows (and their Windows version of the compiler!).
It's not exactly logic land; there is a lot of dogma and ideology instead of pragmatism.
Some people would even reply that they were glad it made life difficult for Windows users. I don't think they had an answer for why there was a Windows version in the first place.
Zig goes for simplicity while removing a few footguns. It's more oriented towards programmer enjoyment. Keep in mind that programmers don't distinguish ease of writing code from ease of writing unforeseen errors.
There are no destructors so all the memory ownership footguns are still there.
Rust is a modern C++/OCaml
So if you enjoy C++, Rust is for you. If you enjoy C and wish it was more verbose and more modern, try Zig.
One could argue Rust is what C++ wanted to be, maybe. But C++ as it is now is anything but clean and clear.
It’s like people do it just because Zig is very comparable to C. So the more complex Rust must be like something else that is also complex, right? And C++ is complex, so…
But that is a bit nonsensical. Rust isn’t very close to C++ at all.
For example, high performance servers (voltlane.net), programming languages (https://github.com/HF-Foundation, https://github.com/lionkor/mcl-rs, and one private one), webservers (beampaint.com) and lots of other domains.
Rust is close to C++ in that it is a systems language that allows a reasonable level of zero-cost abstractions.
Writing the compiler toolchains that Rust depends on, industry standards like CUDA, SYCL, Metal, Unreal, or the VFX Reference Platform.
Maybe cranelift will eventually surpass LLVM, but there isn't currently much reason to push for that.
C++ is good for some things regardless of its warts due to ecosystem, and Rust is better in some other ones, like being much safer by default.
Both will have to coexist in decades to come, but we have this culture that doesn't accept matches that end in a draw, it is all about being in the right tribe.
I know, timelines not matching up, etc.
Zig offers no such thing. It would be a like-for-like replacement of an unsafe old language with an unsafe new one. It may even be a better language, but that's not enough reason to overcome the burden.
Rust gives us memory safety by default and some awesome ML-ish type system features among other things, which are things we didn’t already have. Memory safety and almost totally automatic memory management with no runtime are big things too.
Go, meanwhile, is like a cleaner more modern Java with less baggage. You might also compare it to Python, but compiled.
Are there any other languages that provide this? Would genuinely consider the switch for some stuff if so.
With Wails it’s also a low friction way to build desktop software (using the heretical web tech that people often reach for, even for this use case), though there are a few GUI frameworks as well.
Either way, self-contained executables that are easy to make, a rich standard library during development, and a language that isn't too hard to use go a long way!
- It was explicitly intended to "feel dynamically-typed"
- Tries to live by the zen of Python (more than Python itself!)
- Was built during the time it was fashionable to use Python for the kinds of systems it was designed for, with Google thinking at the time that they would benefit from moving their C++ systems to that model if they could avoid incurring the performance problems associated with Python. Guido van Rossum was also employed at Google during this time. They were invested in that sort of direction.
- Often reads just like Python (when one hasn't gone deep down the rabbit hole of all the crazy Python features)
If you enjoy C and wish it was less verbose and more modern, try Go.
GC is a showstopper for my day job (hard realtime industrial machine control/robotics), but would also be unwanted for other use cases where worst case latency is important, such as realtime audio/video processing, games (where you don't want stutter, remember Minecraft in Java?), servers where tail latency matters a lot, etc.
Which is a very niche use case to begin with, isn't it? It doesn't really contradict what the parent comment stated about Go feeling like modern C (with a boehm gc included if you will). We're using it this way and it feels just fine. I'd be happy to see parts of our C codebase rewritten in Go, but since that code is security sensitive and has already been through a number of security reviews there's little motivation to do so.
My specific use case is yes, but there are a ton of microcontrollers running realtime tasks all around us: brakes in cars, washing machine controllers, PID loops to regulate fans in your computer, ...
Embedded systems in general are far more common than "normal" computers, and many of them have varying levels of realtime requirements. Don't believe me? Every classical computer or phone will contain multiple microcontrollers, such as an SSD controller, a fan controller, wifi module, cellular baseband processor, ethernet NIC, etc. Depending on the exact specs of your device of course. Each SOC, CPU or GPU will contain multiple hidden helper cores that effectively run as embedded systems (Intel ME, AMD PSP, thermal management, and more). Add to that all the appliances, cars, toys, IOT things, smartcards, etc all around us.
No, I don't think it is niche. Fewer people may work on these, but they run in far more places.
Some embedded use cases would be fine with a GC (MicroPython is also a thing after all). Some want deterministic deallocation. Some want no dynamic allocator at all. From what I have seen, far more products are in the latter two categories, while many hobby projects fall into the first two. That is of course a broad generalization, but there is some truth to it.
Many products want to avoid allocation entirely either because of the realtime properties, or because they are cost sensitive and it is worth spending a little bit extra dev effort to be able to save a euro or two and use a cheaper microcontroller where the allocator overhead won't fit (either the code in flash, or just the bookkeeping in RAM).
Here is the commercial product for which it was designed,
https://reversec.com/usb-armory
A presentation from 2024,
https://www.osfc.io/2024/talks/tamago-bare-metal-go-for-arm-...
You can also see it differently: if the language dictates a 4x increase in memory or CPU usage, then the deadline before you need to upgrade the machine or rearchitect your code into a distributed system also moves roughly 4x closer.
Previously, delivering a system (likely in C++) that consumed factor 4 fewer resources was an effort that cost developer time at a much higher factor, especially if you had uptime requirements. With Rust and similar low-overhead languages, the ratio changes drastically. It is much cheaper to deliver high-performance solutions that scale to the full capabilities of the hardware.
Rust is not object-oriented.
That makes your statement wrong.
Plenty of OOP architectures can be implemented 1:1 in Rust's type system.
Which allowed me to port the Raytracing Weekend tutorial 1:1 from the original OOP design in C++ to Rust.
Also the OOP model used by COM and WinRT ABIs, that Microsoft makes heavy use of in their Rust integration across various Windows and Office components.
Plenty of OOP architecture can be implemented in C. That's an extremely flawed and fuzzy definition. But we've been through this before.
A much saner definition is looking at how languages evolved and how the term is used. The way it's used is to describe an inheritance-based language. Basically C++ and its descendants.
The primary common ground is that their functions have encapsulation, which is what separates them from functions without encapsulation (i.e. imperative programming). This already has a name: functional programming.
The issue is that functional, immutable programming language proponents don't like to admit that immutability is not on the same plane as imperative/functional/object-oriented programming. Of course, imperative, functional, and object-oriented languages can all be either mutable or immutable, but that seems to evade some.
> SmallTalk
Smalltalk is different. It doesn't use function calling. It uses message passing. This is what object-oriented was originally intended to reference — it not being functional or imperative. In other words, "object-oriented" was coined for Smalltalk, and Smalltalk alone, because of its unique approach — something that really only Objective-C and Ruby have since adopted in a similar way. If you go back and read the original "object-oriented" definition, you'll soon notice it is basically just a Smalltalk laundry list.
> how term is used.
Language evolves, certainly. It is fine for "object-oriented" to mean something else today. The only trouble is that it's not clear to many what to call what was originally known as "object-oriented", etc. That's how we end up in this "no it's this", "no it's that" nonsense. So, the only question is: What can we agree to call these things that seemingly have no name?
IMO, Rust is good for modeling static constraints - ideal when there are multiple teams of varying skill working on the same codebase, as the contracts for components are a lot clearer. Zig is good for expressing system-level constructs efficiently: doing stuff like self-referential/intrusive data structures, cross-platform SIMD, and memory transformations is a lot easier in Zig than Rust (see the sketch below).
Personally, I like Zig more.
[0] https://crates.io/users/kprotty
[1] https://github.com/rust-lang/rust/pull/95801
[2] https://github.com/rust-lang/rust/blob/a63150b9cb14896fc22f9...
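On the cross-platform SIMD point above, a minimal sketch (mine, not the commenter's) of what that looks like in Zig: `@Vector` types get element-wise arithmetic that the compiler lowers to whatever the target CPU supports.

    const std = @import("std");

    const Vec4 = @Vector(4, f32);

    // a + b * s, computed four lanes at a time; no target-specific intrinsics.
    fn addScaled(a: Vec4, b: Vec4, s: f32) Vec4 {
        return a + b * @as(Vec4, @splat(s));
    }

    pub fn main() void {
        const r = addScaled(.{ 1, 2, 3, 4 }, .{ 5, 6, 7, 8 }, 0.5);
        std.debug.print("{any}\n", .{r});
    }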
Rust is what you want your colleagues to write, to enforce good practices and minimise bugs. It's also what I want my past self to have written, because that guy is always doing things that make my present life harder.
Also, my .zig-cache is currently at 173GB, which causes some issues on the small Linux ARM VPS I test with.
As for upgrades. I upgraded lightpanda to 0.14 then 0.15 and it was fine. I think for lightpanda, the 0.16 changes might not be too bad, with the only potential issue coming from our use of libcurl and our small websocket server (for CDP connections). Those layers are relatively isolated / abstracted, so I'm hopeful.
As a library developer, I've given up following / tracking 0.16. For one, the changes don't resonate with me, and for another, it's changing far too fast. I don't think anyone expects 0.16 support in a library right now. I've gotten PRs for my "dev" branches from a few brave souls and everyone seems happy with that arrangement.
I don't use zig. My experience has been that caches themselves are sources of bugs (not talking about zig only, but in general). Clearing all relevant caches occasionally is useful when you're experiencing weird bugs.
Do you see any major problems when you remove your .zig-cache and start over?
I was searching around for causes and came across the following issues: https://github.com/ziglang/zig/issues/15358 which was moved to https://codeberg.org/ziglang/zig/issues/30193
The following quotes stand out
> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.
> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.
> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space
Like, what's going on? This doesn't seem normal at all. I also read somewhere that Zig stores every version of your binary as well? Can you shed some light on why it works like this in Zig land?
I would bet that, as of 2026, more new C++ projects are still being started than Rust and Zig combined.
The world is much more vast than Show HN and GitHub would indicate.
This puts much more work on the compiler development side, but it's a great boon for the ecosystem.
To be fair, Zig is pre-1.0, but it is also already 8 years old. Rust turned 1.0 at ~5 years, I think.
After many years of insisting that "dialects" of C++ are a terrible idea, despite the reality that most C++ users have a specific dialect they use, Bjarne Stroustrup has endorsed essentially the same thing but as "profiles" to address safety issues. So for people who think there is a "great language" in there, perhaps in C++ 29 or C++ 32 you will be able to find out for yourselves that you're wrong.
There is still this disconnect about how languages under the ISO process work in the industry.
It's notable that the projects you mentioned mostly don't need to deal with adversarial user input, while the projects I mentioned do. That's one area that Rust shines in.
Android team is quite clear that Java, Kotlin, C and C++ are the official languages for app developers.
Chrome even has less Rust than Firefox.
Linux has some nascent adoption, and it isn't without drama, even with Microsoft and Google pushing for it.
Chrome only needs to replace the parts of their codebase that handle untrusted input with Rust to get substantial benefits. Like codec parsers. They don't need to rewrite everything, just the parts that need rewriting. The parts that are impossible to get right in C++, to the point where Chrome spins up separate processes to run that code.
Rust is the future for Android, and it will become an important part of Chrome and Linux and git (starting with 3.0). That's just the way it is.
- Cranelift applies fewer optimizations in exchange for faster compilation times, because it was developed to compile WASM (wasmtime), but it turns out that's good enough for Rust debug builds.
- Cranelift does not support a wide range of platforms (AFAIK just x86_64 and some ARM targets).
There is a whole ecosystem of contributions across the globe, and it is the lingua franca used by those contributors.
which is slowly changing with wider Rust adoption.
Same for the Metal shading language. C++ adds exactly nothing useful to a shading language over a C dialect that's extended with vector and matrix math types (at least they didn't pick ObjC or Swift though).
...that's just because of the traditional-game-dev Stockholm syndrome towards C++ (but not too much C++ please!).
> Khronos future for Vulkan
As far as I'm aware Khronos is not planning to move the Vulkan API to C++ - and the 'modern C++' sample code which adds a C++ RAII wrapper on top of the Vulkan C API does more harm than good (especially since lifetime management for Vulkan objects is a bit more involved than just adding a class wrapper with a destructor).
There is more than one sample using C++; they now make use of C++20, including modules if desired.
It's in line with many other shitty design decisions coming out of Khronos, so I'm not even surprised ;)
IMHO it's a pretty big problem when the spec is on an entirely different abstraction level than the sample code (those new samples also move significant code into 'helper classes', which means all the interesting stuff is hidden away).
Packages do fall behind. We only use a couple, so it's pretty easy to point to an internal fork while we wait for upstream to update or to accept our updates. That'd probably be a pain point if you were using a lot of them.
> Are there cases where packages you may use fall behind the language?
Using third party packages is quite problematic yes. I don't recommend using them too much personally, unless you want to make more work for yourself.
I think one of the more PITA changes necessary to get these projects to 0.15 is removing `usingnamespace`, which I've used to implement a kind of mixin (a sketch of the pattern below). The projects are all a few thousand LOC and it shouldn't be that much trouble, but it's enough trouble that none of what I gain from upgrading currently justifies doing it. I think that's fine.
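For context, a hedged sketch of the `usingnamespace`-style mixin being described, roughly as it looked before the feature's removal (the names are illustrative, not taken from the actual projects):

    fn CounterMixin(comptime Self: type) type {
        return struct {
            pub fn increment(self: *Self) void {
                self.count += 1;
            }
            pub fn reset(self: *Self) void {
                self.count = 0;
            }
        };
    }

    const Widget = struct {
        count: u32 = 0,
        // Pre-0.15: pulls the mixin's declarations into Widget's namespace,
        // so `widget.increment()` works as if defined on Widget itself.
        pub usingnamespace CounterMixin(Widget);
    };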
I asked him about it in a thread a while back: https://news.ycombinator.com/item?id=47206009#47209313
The makers of TigerBeetle also rave about how good Zig is.
Just a degree of success?
Color me extremely sceptical. Surely if you could make JavaScript fast, Google would have tried a decade ago...
Surely nobody would use JavaScript for either, yeah? The weaknesses of the language are amplified in constrained environments: low certainty, high memory pressure, high startup costs.
It's probably the most popular language for serverless.
Anyway, their decision to implement a TUI was definitely not done out of laziness, nor even pragmatism. It was a fashion choice. A deliberate choice to put their product in the same vibes-space as console jockey hotshot unix pros who spit out arcane one-liners to get shit done. They very easily could have asked Claude to write itself a proper GUI, which completely avoids all the pitfalls of TUIs and simplifies a lot of things they went out of their way to make work in a TUI. Support for drag-and-drop, for instance, isn't something you'll find in many TUIs, but they have it. They put care into making this TUI; the problem is that TUIs are kind of shit, and they certainly know that. They did it this way anyway, effectively for marketing reasons.
This makes me feel that the underlying technology behind Zig is solid.
But I prefer Rust over Zig. The main difference is Rust chooses a "closed world" model while Zig chooses an "open world" model: in Rust, you must explicitly implement a trait, while in Zig as long as the shape fits, or the `.` on a structure member exists (for whichever type you pass in), it will work (I don't use Zig, so pardon the hand-wavy description).
This gives Zig very powerful meta programming abilities but is a pain because you don't know what kind of type "shapes" will be used in a particular piece of code. Zig is similar to C++ templates in some respects.
This has a ripple effect everywhere. Rust generated documentation is very rich and explicit about what functions a structure supports (as each trait is explicitly enrolled and implemented). In Zig the dynamic nature of the code becomes a problem with autocomplete, documentation, LSP support, ...
Do you happen to have a more specific example by any chance? I’d be interested in what this looks like in practice, because what you described sounds a bit like Go interfaces and from my understanding of Zig, there’s no direct equivalent to it, other than variations of fieldParentPtr.
It's an extremely useful thing, but unconstrained, it's essentially duck typing at compile time. People have been wanting some kind of trait/interface support to constrain it, but it's unlikely to happen.
I quite like it when writing C++ code. Makes it dead easy to write code like `min` that works for any type in a generic way. It is, however, arguably the main culprit behind C++'s terrible compiler errors, because you'll have standard library functions which have like a stack of fifteen generic calls, and it fails really deeply on some obscure inner thing which has some kind of type requirement, and it's really hard to trace back what you actually did wrong.
In my (quite limited) experience, Zig largely avoids this by having a MUCH simpler type system than C++, and the standard library written by a sane person. Zig seems "best of both worlds" in this regard.
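A minimal sketch (mine, not the commenters') of the compile-time duck typing being discussed: the function below accepts any type whose "shape" fits, and the check happens independently at each call site during compilation.

    const std = @import("std");

    fn area(shape: anytype) f64 {
        // Works for whichever type happens to have numeric `width` and
        // `height` fields; nothing is declared or implemented up front.
        return shape.width * shape.height;
    }

    pub fn main() void {
        const Rect = struct { width: f64, height: f64 };
        const Sprite = struct { width: f64, height: f64, name: []const u8 };

        std.debug.print("{d}\n", .{area(Rect{ .width = 2, .height = 3 })});
        std.debug.print("{d}\n", .{area(Sprite{ .width = 4, .height = 5, .name = "player" })});
        // Passing a type without those fields fails at compile time, but only
        // once this particular call site is instantiated - hence the tooling
        // trouble mentioned above.
    }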
In that world, things like global variables are perfectly fine. But then we got first preemptive scheduling and threads, then actual multicore CPUs, so global variables became really dangerous. Thread locals are the escape hatch that carried these patterns into the 21st century, for better or worse.
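A minimal sketch (mine) of that escape hatch in Zig terms: `threadlocal` gives every thread its own copy of what would otherwise be one shared, race-prone global.

    // Each thread that calls noteCall() increments its own private counter,
    // so the old "just use a global" pattern keeps working under threads.
    threadlocal var call_count: usize = 0;

    fn noteCall() usize {
        call_count += 1;
        return call_count;
    }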
One thing that tends to be overlooked when discussing changes is the ecosystem effect of frequent changes.
A language that breaks frequently doesn't just impose upgrade work on apps, it also discourages the creation of long-lived libraries and tools. Anything that sits between the language and the user (linters, bindings, frameworks, teaching material, tutorials, etc.) has to, to some degree, "chase the language".
This means that the ecosystem will skew toward very actively maintained libraries and away from "write once then leave it alone" libs. This trade-off is reasonable during early language design, but it's worth acknowledging that it has real consequences for ecosystem growth.
One should note that other newer languages have put significant effort into minimizing this churn, precisely to allow the latter type of ecosystem to also form. So it's kind of an experiment, and it will be interesting to see which approach ends up producing the larger ecosystem over time.
Yet, someone has to do them. Ideally it is the creator of the addon, sometimes it's the users who do it, when the addon is not maintained anymore (in case of trivial changes).
It kinda works that way, but it also is some kind of gamble for the user. When you see a new addon (and a new addon developer), you can't know if they're gonna stick with it or not.
If you have to pay for the addon, it's more likely they maintain it, of course. But also not a guarantee.
But in the world of desktop development it's possible for a library to be "done", having a 100% stable codebase going forward and requiring no maintenance. And it's not bad, it's actually good.
Requiring every dependency to be constantly maintained is a massive drain on productivity.
N.B.: Coffee hasn't reached my bloodstream yet; accuracy not guaranteed.
That's not true. They use ProcessPrng since versions earlier than 10 are no longer supported (well, rust also has a windows 7 target but that couldn't use ProcessPrng anyway since it wasn't available). The issue they linked is from a decade ago. E.g. here's Chromium: https://github.com/chromium/chromium/blob/dc7016d1ef67e3e128...
> If [ProcessPrng] fails it returns NO_MEMORY in a BOOL (documented behavior is to never fail, and always return TRUE).
From Windows 10 onward ProcessPrng will never fail. There's a whitepaper that gives the justification for this (https://aka.ms/win10rng):
> We also have the property that a request for random bytes never fails. In the past our RNG functions could return an error code. We have observed that there are many callers that never check for the error code, even if they are generating cryptographic key material. This can lead to serious security vulnerabilities if an attacker manages to create a situation in which the RNG infrastructure returns an error. For that reason, the Win10 RNG infrastructure will never return an error code and always produce high-quality random bytes for every request...
> For each user-mode process, we have a (buffered) base PRNG maintained by BCryptPrimitives.dll. When this DLL loads it requests a random seed from kernel mode (where it is produced by the per-CPU states) and seeds the process base PRNG. If this were to fail, BCryptPrimitive.dll fails to load, which in most cases causes the process to terminate. This behavior ensures that we never have to return an error code from the RNG system.
I'm DevOps writing boring Python microservices for €. I have no CS background and never did systems programming. However, writing Python always bothered me because there are so many layers between you and what's happening on the metal. For me, Django is the peak example of this, to me it feels almost like doing no code. It makes me very uncomfortable writing it.
Then I heard about this new programming language Zig on YouTube and I just gave it a try. After using it for a few months, I really like it. I guess mostly because it is so explicit.
It is almost like the language encourages you to think in terms of system design. Zig offers a lot of freedom so you can design the perfect tool for your problem. And somehow, it feels very effective for it. I think it is a blessing that there are few third party libraries for the same reason.
For example, I am working on a tool to parse CIM (some XML standard). If I had to use Python for this, my solution would probably use the most popular XML parsing library and then go from there. Yawn.
Instead, with Zig I started to think about the problem with a very fresh mind. I started thinking more from the first principles of the problem. And I got very excited again about programming. During my swimming practice or biking, I kept thinking about the design and how I could make it simpler and improve it by simply not doing certain busywork. I can't fully explain it. But the language gets you in that mindset.
Maybe other system languages also offer this experience; Zig (marketing?) just happened to cross my path at the right moment.
There are similar hidden quirks in the language that will need to be addressed at some point, such as integer promotion semantics.
To address the question about stability: the Zig community are already used to Zig breaking between 0.x versions. Unlike competitors such as Odin or my own C3, there is no expectation that Zig is trying to minimize upgrading problems.
This is a cultural thing; it would be no real problem to be clear about deprecations, but in the Zig community it's simply not valued. In fact, it's a source of pride to be able to adapt as fast as possible to the new changes.
I like to talk about expectation management, and this is a great example of it.
In discussions, it is often falsely argued that ”Zig is not 1.0 so breaks are expected” in order to motivate the frequent breaks. However, there are degrees to how you handle breaks, and Zig is clearly in the ”we don’t care to reduce the work”-camp.
If someone is trying to get a more objective look at the Zig upgrade path, then it’s worth keeping in mind that the tradition in Zig is to offload all the work on the user.
The argument, which is frequently voiced, is that ”breaking things will make the language get better and so it’s good that there are language breaks”
It is certainly true that breaking changes are needed, but most people outside of the Zig community would expect it to be done with more care (deprecation paths etc)
Secondly, it should perhaps be a concern for Zig, now 10 years old, to still ship releases that solidly break code every half year.
10 years is the common point where languages go 1.0. However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
This means that in the future, projects using Zig can still expect any libraries and applications to bitrot quickly if they are not constantly maintained.
Putting this in contrast with Odin (9 years old) which is essentially 1.0 already and has been stable for several years.
Maybe this also explains the difference in actual output. For example, the number of games I know of written in Odin is somewhere between 5 and 10 times as many as Zig games. Factoring in that Zig has maybe 5 or 10 times as many users, it means Odin users are somewhere between 20 and 100 times as likely to have written a playable game.
There are several explanations as to why this is: we could discuss whether the availability of SDL, Raylib, etc. is better in Odin (then why is Zig less friendly?), maybe Odin has better programmers (then why do better programmers choose Odin over Zig?), maybe it's just easier to write resource-intensive applications with Odin than Zig (then what do we make of Zig's claim of optimality?).
If we look past the excuses made for Zig (”it’s easy to fix breaks” ”it’s not 1.0”) and the hype (”Zig is much safer than C” ”Zig makes me so productive”) and compare with Odin in actual productivity, stability and compilation speed (neither C3 nor Odin requires 100s of GB of cache to compile in less than a second using LLVM) then Zig is not looking particularly good.
Even things like build.zig, often touted as a great thing, make it really hard for a Zig beginner ("to build your first Hello World, first understand this build script in non-trivial Zig"). Then for IDEs, suddenly something as simple as reading the configuration of what is going to be used for building is hidden behind an opaque Zig script. These trade-offs are rarely talked about, as both criticism and hype are usually based on surface rather than depth.
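For reference, roughly what a minimal build.zig looks like (a hedged sketch following the 0.13/0.14-era API; the exact function names shift between versions), which is the first Zig code a beginner has to read just to build Hello World:

    const std = @import("std");

    pub fn build(b: *std.Build) void {
        // Standard flags: let the user pick target and optimization mode.
        const target = b.standardTargetOptions(.{});
        const optimize = b.standardOptimizeOption(.{});

        // Describe the executable as a build-graph step, then install it.
        const exe = b.addExecutable(.{
            .name = "hello",
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
        });
        b.installArtifact(exe);
    }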
Well, that’s long enough of a comment.
To round it off I’d like to end on a positive note: I find the Zig community nice and welcoming. So if you’re trying Zig out (and better do that, don’t let others’ opinions - including mine - prevent you from trying things out) do so.
If you want to evaluate Zig against competitors, I’d recommend comparing it to D, Odin, Jai and C3.
Not at all, if the team needs 30 more years they should take it.
> However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
Funny you see it as bleak when most of the community sees it as the most exciting thing in systems programming happening right now.
I think your comment is in bad faith; all the big Zig projects say that the upgrade path is never a main concern. Just read HN comments here or on other Zig threads: people ask about this a lot and maintainers always answer.
Yes, I understand that is the opinion in the Zig community. As an outsider, it seems odd to me to pick a language that I constantly need to maintain.
>> However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
> Funny you see it as bleak when most of the community sees it as the most exciting thing in systems programming happening right now.
You misread that one. I was talking about the odds of seeing a 1.0 version of Zig soon.
> I think your comment is in bad faith; all the big Zig projects say that the upgrade path is never a main concern. Just read HN comments here or on other Zig threads: people ask about this a lot and maintainers always answer.
Maybe you didn't read what I wrote carefully enough. This is part of the protectiveness from the Zig community that prompted me to write in the first place.
WITHIN the Zig community it is deemed acceptable for Zig upgrades to break code. Consequently it becomes simple survivor bias that people who use Zig for larger projects don't think that this is a major concern BECAUSE IF THEY FELT IT WAS A CONCERN THEY WOULD NOT USE ZIG.
Whether programmers at large feel that this is a problem is an unknown still, since Zig has not yet reached the point of general adoption (when people use Zig because they have to, rather than because they want to).
However, it is INCORRECT to state that just because a language is not yet 1.0 it needs to break older code aggressively without deprecation paths. As an example, Odin removed the old `os` module and replaced it with the new "os2". This break was announced half a year in advance and lots of thought was put into reducing work for developers: https://odin-lang.org/news/moving-towards-a-new-core-os/
In the case of C3, breaking changes only happen once a year with stdlib going through the general process of deprecating functions long before removing them.
I wanted to highlight how these are quite different approaches. For established languages this is of course even more rigorous, but neither C3 nor Odin is 1.0, and both still see this as valuable, so their communities end up expecting it.
So please understand that when you say "it's never a main[sic] concern", this is simple survivor bias.
Honestly it’s kind of narrow-minded not to appreciate how different its approach to low-level programming is. You either see it or you don’t.
C3 is basically just C with a few extra bells and whistles. Still the same old C. Why would I use that when Zig exists? Actually never mind, don’t bother answering.
And the doomer posts are so predictable. That’s usually when you know Zig is doing something right.
I understand both of the following:
1. Language development is a tricky subject, in general, but especially for those languages looking for wide adoption or hoping for ‘generational’ (program life span being measured in multiple decades) usage in infrastructure, etc.
2. Zig is a young-ish language, not at 1.0, and explicitly evolving as of the posting of TFA
With those points as caveats, I find the casualness of the following (from the Codeberg post linked on the devlog) surprising:
‘’’This branch changes the semantics of "uninstantiable" types (things like noreturn, that is, types which contain no values). I wasn't originally planning to do this here, but matching the semantics of master was pretty difficult because the existing semantics don't make much sense.’’’
I don't know Zig's particular strategy and terminology for language and compiler development, but I would assume the usage of 'branch' here implies this is not a change fully/formally adopted by the language but more a fully implemented proposal. Even if it is just a proposal for change, the large scale of the rewrite and clear implication that the author expects it to be well received strikes me as uncommon confidence. Changing the semantics of a language with any production use is nearly definitionally MAJOR; to just blithely state that your PR changes semantics and proceed with no deep discussion (which could have previously happened, IDK), serious justification, or statement of the limited effect of those changes is not something I have experienced watching the evolution (or de-evolution) of other less 'serious' languages.
Is this a "this dev" thing, a Zig thing, or am I just out of touch with modern language (or even larger-scale development) projects?
Also, not particularly important or really significant to the overall thrust of TFA, but the author uses the phrase “modern Zig”, which given Zig’s age and seeming rate of change currently struck me as a very funny turn of phrase.
* The author is being flippant and not taking the situation seriously enough.
* The author is presuming a high-trust audience that knows that they have done all the due diligence and don't have to restate all of that.
In this case, it's a devlog (i.e. not a "marketing post") for a language that isn't at 1.0 yet. A certain amount of "if you're here, you probably have some background" is probably reasonable.
The post does link directly to the PR and the PR has a lot more context that clearly conveys the author knows what they are doing.
It is weird reading about (minor) breaking language changes sort of mentioned in passing. We're used to languages being extremely stable. But Zig isn't 1.0 yet. Andrew and friends certainly take user stability seriously, but you sign up for a certain amount of breakage if you pick the language today.
As someone who maintains a post-1.0 language, there really is a lot of value in breaking changes like this. It's good to fix things while your userbase is small. It's maddening to have to live with obvious warts in the language simply because the userbase got too big for you to feasibly fix it, even when all the users wish you could fix it too. (Witness: The broken precedence of bitwise operators in C.)
It's better for all future users to get the language as clean and solid as you can while it's still malleable.
This was proposed, discussed, and accepted here: https://github.com/ziglang/zig/issues/3257
Later, Matthew Lugg made a follow-up proposal, which was discussed both publicly and in ZSF core team meetings. https://github.com/ziglang/zig/issues/15909
He writes:
> A (fairly uncontroversial) subset of this behavior was implemented in [the changeset we are discussing]. I'll close this for now, though I'll probably end up revisiting these semantics more precisely at some point, in which case I'll open a new issue on Codeberg.
I don't know how evident this is to the casual HN reader, but to me this changeset very obviously moves Zig the language a large degree from experimental territory towards being formally specified, because it makes type resolution a Directed Acyclic Graph. Just look at how many bugs it resolved to get a feel for it. This changeset alone will make the next release of the compiler significantly more robust.
Now, I like talking about its design and development, but all that being said, the Zig project does not aim for full transparency. It says right there in the README:
> Zig is Free and Open Source Software. We welcome bug reports and patches from everyone. However, keep in mind that Zig governance is BDFN (Benevolent Dictator For Now) which means that Andrew Kelley has final say on the design and implementation of everything.
It's up to you to decide whether the language and project are in trustworthy hands. I can tell you this much: we (the dev team) have a strong vision and we care deeply about the project, both to fulfill our own dreams as well as those of our esteemed users whom we serve[1]. Furthermore, as a 501(c)(3) non-profit we have no motive to enshittify.
[1]: https://ziglang.org/documentation/master/#Zen
It's been incredible working with Matthew. I hope I can have the pleasure to continue to call him my colleague for many years to come.
This stuff is foundational and so it's certainly a priority to get it right (which C++ didn't and will be paying for until it finally collapses under its own weight) but it's easier to follow as an outsider when people use conventional terminology.
struct Goose;
let x = Goose; // The variable x has type Goose, but also value Goose, the only value of that type
The choice to underscore that Rust's unit types have size zero is to contrast with languages like C or C++ where these types, which don't need representing, must nevertheless take up a whole byte of storage and it's just wasted. But what we're talking about here are empty types. In Rust we'd write this:
enum Donkey {}
// We can't make any variables with the Donkey type, because there are no values of this type and a variable needs a value

Rust's core::convert::Infallible is such a type, representing in that case the error type for conversions which have no errors. For example, we can try to convert most numeric types into a 16-bit unsigned type, and obviously most of them can fail because your value was too big or negative or whatever. Converting u16 itself, however, obviously never fails; the conversion is a no-op. Nevertheless, it would be stupid if we couldn't write generic code to convert it - so of course we can, and if we wrote generic error handling code, that code is dead for the u16 case. We can't fail, so the use of empty types here justifies the compiler saying OK, that code is dead, don't emit it. Likewise converting u8 to u16 can't fail - although it's slightly more than a no-op in some sense - and so again the error handling is dead code.
io.concurrent(foo, .{});
where foo's return type is `error{foobar}!noreturn`, because the compiler crashes when it tries to use that type as a std.Io.Future(T)'s struct field. Might be related or not.

I enjoy all of the process and implementation content, i.e. the videos, podcasts, and blogs that you and the contributors have provided through various platforms over the years. I made a comment from a relatively uninformed place and it seems to have been taken as a negative remark on Zig or the language and compiler's development. Apologies if it was seen that way by you or other contributors.
As for Matthew, the work both planning and implementing these changes was clearly a large undertaking and I certainly applaud that and hope no one thought otherwise.
For a concrete example, while testing this branch, I tried building ZLS (https://github.com/zigtools/zls/). To do that, the only change I had to make was changing `.{}` to `.empty` in a couple of its dependencies (i.e. not even in ZLS itself!). This was needed because I removed some default values from `std.ArrayList` (so the change was in standard library code rather than the language). Those default values had actually already been deprecated (with intent to remove) for around a year, so this wasn't exactly a new change either.
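For illustration, a hedged sketch of what that one-line fix looks like (the variable is mine, not from the actual dependencies): where field defaults previously allowed `= .{}`, the empty list is now spelled with the declared constant.

    const std = @import("std");

    // Before: `var names: std.ArrayList([]const u8) = .{};` relied on the
    // deprecated field defaults. After:
    var names: std.ArrayList([]const u8) = .empty;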
As another example, Andrew has updated Awebo (https://codeberg.org/awebo-chat/awebo), a text and voice chat application, to the new version of Zig. Across Awebo's entire dependency tree (which includes various packages for graphics, audio, and probably some other stuff), the full set of necessary changes was:
* Same as above, change `.{}` to `.empty` in a few places, due to removal of deprecated defaults
* Add one extra `comptime` annotation to logic which was constructing an array at comptime
* Append `orelse @alignOf(T)` onto an expression to deal with a newly-possible `null` case
These are all trivial fixes which Zig developers would be able to do pretty much on autopilot upon seeing the compile errors.
So, while there were a handful of small breaking changes, they don't seem to me like a particularly big deal (for a language where some level of breakage is still allowed). The main thing this PR achieved was instead a combination of bugfixes, and enhancements to existing features (particularly incremental compilation).
I noticed the following comment was added to lib/std/multi_array_list.zig [0] with this change:
How could relying on `@alignOf(T)` in the definition of `MultiArrayList(T)` cause a loop? Even with `T` itself being a MultiArrayList, surely that is a fully distinct, monomorphized type? I expect I am missing something obvious.

[0]: https://codeberg.org/ziglang/zig/pulls/31403/files#diff-a6fc...
> i had to change the bytes field from [*]align(@alignOf(T)) u8 to just [*]u8 (and cast the alignment back in the like one place that field is accessed). this wasn't necessary for MultiArrayList in and of itself, but it was necessary for embedding a MultiArrayList(T) inside of T without a dependency loop, like
[0]: https://zsf.zulipchat.com/#narrow/channel/454360-compiler/to...
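A hedged sketch (mine, not the snippet from the linked thread) of the embedding pattern that quote refers to: a type that stores a MultiArrayList of itself. With the old `[*]align(@alignOf(T)) u8` field, resolving Node's layout required MultiArrayList(Node), which required Node's alignment, and so on.

    const std = @import("std");

    // A node whose children are stored in struct-of-arrays form. With the
    // bytes field typed as plain [*]u8, MultiArrayList(Node) no longer needs
    // Node's alignment during type resolution, so this doesn't form a
    // dependency loop anymore.
    const Node = struct {
        value: u32,
        children: std.MultiArrayList(Node),
    };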