Jargon terms like "sum types" or "affine types" may sound complicated, but once you see that a sum type is really just an enum whose variants carry data, it makes so much sense, and it prevents plenty of state-related bugs.
The proposed "effects" mean that when you're writing an iterator or a stream and need to handle an error or await somewhere in the chain, you won't suddenly face the puzzle of replacing every function in the chain and your call stack with its async or fallible equivalent.
"Linear types" mean that Rust will be able to exert more control over the destruction and lifetime of objects beyond the sync call stack, so tokio::spawn() (the "Rust async sucks" function) won't have to complain endlessly about lifetimes whenever you use a local variable.
I can't vouch for the specifics of the proposed features (they have tricky to design details), but it's not simply Rust getting more complex, but rather Rust trying to solve and simplify more problems, with robust and generalizable language features, rather than ad-hoc special cases. When it works it makes the language more uniform overall and gives a lot of bang for the buck in terms of complexity vs problems solved.
Also the ecosystem is set up so you have to use tokio and everything has to be an Arc.
If you don't care about embedded, that is fine. But almost all systems in the world are embedded. "Normal" computers are the odd ones out. Every "normal" computer has several embedded systems in it (one or more of SSD controller, NIC, WiFi controller, cellular modem, embedded controller, etc.). And then cars, appliances, cameras, routers, toys, etc. have many more.
It is a use case that matters. To have secure and reliable embedded systems is important to humanity's future. We need to turn the trend of major security vulnerabilities and buggy software in general around. Rust is part of that story.
I've implemented several stackful and stackless async engines from scratch. When I started out I had a naive bias toward stackful, but over time I have come to appreciate that stackless is the correct model, even if it seems more complicated to use.
That said, I don't know why everyone uses runtimes like tokio for async. If performance is your objective then not designing and writing your own scheduler misses the point.
...Minus the various tradeoffs that made stackful coroutines a nonstarter for Rust's priorities. For example, Rust wanted:
- Tight control over memory use (no required heap allocation, so segmented stacks are out)
- No runtime (so no stack copying and/or pointer rewriting)
- Transparent/zero-cost interop over C FFI (i.e., no need to copy a coroutine stack to something C-compatible when calling out to FFI)
I don't understand what kind of use case they were optimizing for when they designed this system. I don't think they were optimizing only for embedded or similar applications where no runtime is used at all.
Using stackful coroutines, having a trait in std for runtimes, and passing that trait into async functions would be much better in my opinion than having the compiler transform entire functions and then layering more and more complexity on top to solve the complexities that this decision created.
In the case of Rust's async design, the answer is that that simply isn't a problem when your design was intentionally chosen to not require allocation in the first place.
> And pretty much everything in rust async is put into an Arc.
IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.
I think it would be nice if there were less reliance on specific executors in general, though.
> Don't think they were optimizing only for embedded or similar applications where they don't use a runtime at all.
I would say less that the Rust devs were optimizing for such a use case and more that they didn't want to preclude such a use case.
> having a trait in std for runtimes and passing that trait around into async functions
Yes, the lack of some way to abstract over/otherwise avoid locking oneself into specific runtimes is a known pain point that seems to be progressing at a frustratingly slow rate.
I could have sworn that that was supposed to be one of the improvements to be worked on after the initial MVP landed in the 2018 edition, but I can't seem to find a supporting blog post, so I'm not sure whether I'm getting this confused with the myriad other sharp edges Rust's async design has.
> IIRC that's more a tokio thing than a Rust async thing in general. Parts of the ecosystem that use a different runtime (e.g., IIRC embassy in embedded) don't face the same requirements.
Well, if you're implementing an async rust executor, the current async system gives you exactly 2 choices:
1) Implement the `Wake` trait, which requires `Arc` [1], or
2) Create your own `RawWaker` and `RawWakerVTable` instances, which are gobsmackingly unsafe, including `void*` pointers and DIY vtables [2]
It's clear that you don't understand the use cases that async in Rust was designed to accommodate.
> Also the ecosystem is setup so you have to use tokio and everything has to be an Arc.
It's clear that you're not at all familiar with the Rust async ecosystem, including Embassy.
I also don’t understand why you would use Rust for embedded instead of C.
For example, your section on effects:
> Functions which guarantee they do not unwind (absence of the panic effect)
* I actually don’t see how this is any more beneficial than the existing no_panic macro https://docs.rs/no-panic/latest/no_panic/
> Functions which guarantee they terminate (absence of the div effect)
> Functions which are guaranteed to be deterministic (absence of the ndet effect)
> Functions which are guaranteed to not call host APIs (absence of the io effect)
The vast majority of rust programs don’t need such validation. And for those that do, the Ferrocene project is maintaining a downstream fork of the compiler where this kind of feature would be more appropriate.
I think rust is in a perfect spot right now. Covers 99.99% of use cases and adding more syntax/functionality for 0.001% of users is only going to make the language worse. The compiler itself provides a powerful api via build.rs and proc macros which let downstream maintainers build their desired customization.
The vast majority of Rust programs would benefit from such validation existing in the language, even if they never bother to explicitly leverage it themselves. For example, a non-panicking effect would be beneficial for both compilation times (don't bother pessimistically emitting unwinding glue for the enormous number of functions that don't need it, only to attempt to optimize it away later) and runtime performance (the guaranteed absence of the aforementioned glue reduces potential branch points and code size, which leads to better inlining and vectorization).
The thing is, some of these things are very useful in specific domains, and all of those domains are closely related to safety. Absence of nondeterminism and IO is important for purity/referential transparency, which is a fairly important effect for business logic IMO. Guaranteed termination matters for formal verification. Unwind removal matters for embedded. I don't think wishing for these things is all that unwarranted.
> I actually don’t see how this is any more beneficial than the existing no_panic macro
no_panic and similar macros are a very hacky workaround that isn't really a great static guarantee. The simple fact that building with panic = abort makes the macro useless is an annoyance in and of itself. dtolnay did a great job figuring out some path forward, but it's somewhat shaky.
That being said, I'm not at all happy with all the complexity and ecosystem fragmentation that async brought. I understand what you're saying. But surprise panics are a bit of a pain point for me.
Author here. Yes, it is. It was literally made for libraries. Notably https://github.com/dtolnay/zmij and https://github.com/dtolnay/itoa use it to enforce that the libraries' public APIs are free of panics.
Just not allowing complex code is so much better than this.
Even just being able to look at a piece of code and trace what it is doing is 1000x more valuable than this. I've regretted almost every time I allowed traits/generics into a codebase.
I think looking at the caveats listed in the no_panic docs should give you some ideas as to how a "proper" no_panic effect could improve on the macro.
Furthermore, a "proper" effect system should make working with effects nicer in general - for instance, right now writing functions that work independently of effects is not particularly ergonomic.
> The vast majority of rust programs don’t need such validation.
I think you also need to consider the niches which Rust wants to target. Rust is intended to be usable for very low-level/foundational/etc. niches where being able to track such effects is handy, if not outright required, so adding such support would be unblocking Rust for use in places the devs want it to be usable in.
> And for those that do, the Ferrocene project is maintaining a downstream fork of the compiler where this kind of feature would be more appropriate.
Given this bit from the Ferrocene website:
> Ferrocene is downstream from Rust
> It works with existing Rust infrastructure and the only changes made in the code were to cover testing requirements of ISO 26262, IEC 61508 and IEC 62304 qualification. All fixes are reported upstream for constant improvement.
I would suspect that such changes would be out of scope for the Ferrocene fork because that fork is more intended to be a qualified/certified Rust more than Rust + completely novel extensions.
> The compiler itself provides a powerful api via build.rs and proc macros which let downstream maintainers build their desired customization.
Given the complexity of the features listed this feels tantamount to asking each individual consumer to make their own fork which doesn't seem very likely to attract much interest. IIRC async even started off like that (i.e., using a macro), but that was painful enough and async thought to be useful enough to be promoted to a language feature.
I'm curious to what extent one can implement the described features using just build.rs/proc macros in the first place without effectively writing a new compiler.
I'm not sure what the right answer for Rust is, but I'm fairly convinced that these type system ideas are the future of programming languages.
Perhaps it can be added to rust in a reasonable and consistent way that doesn't ultimately feel like a kludgy language post-hoc bolt on. Time will tell. There is a serious risk to getting it wrong and making the language simply more complicated for no gain.
But, these ideas are really not obscure academic stuff. This is where programming language design is at. This moment is like talking about sum-types in the 2010s. These days that concept is normalized and expected in modern programming languages. But, that's a fairly recent development.
I suspect that Linear types, refinement types, etc will follow a similar trajectory. Whether new ideas like this can be reasonably added to existing languages in a good way is the age old question.
Hopefully Rust makes good choices on that road.
I think the Rust team/community is well-aware of this. Which is why Rust has such a well-defined RFC life-cycle.
At the other end, one of the biggest complaints about Rust is that many features seem eternally locked behind nightly feature gates.
I don't personally have a solution to propose to this problem. I generally appreciate their caution and long-term thinking. It's refreshing coming from C++. I suppose one could argue that they've overcorrected in the other direction. Unclear.
Deeper than that, I think there's a philosophical dispute over whether languages should even evolve. There are people with C-stability-type thinking who would argue that long-term stability is so important that we should stop making changes and etch things into stone. There is some merit to that (a lot of unhelpful churn in modern programming), but failure to modernize is eventually death IMHO. I think C is slowly dying because of exactly this. It will take quite a while because it is critical computing infrastructure, but few people remain who defend its viability. The arguments that remain are of the form "we simply don't have a viable replacement yet".
Perhaps you can even take the view that this is the lifecycle of programming languages. They're not supposed to live forever. That could be a reasonable take. But then you really have to confront the problem of code migration from old languages to new languages. That is a very very hard unsolved problem (e.g. see: COBOL).
Language evolution is foundationally a hard problem. And I'm not unhappy with Rust's approach. I think no one has managed to find an ideal approach.
You can go all the way to formal verification. This is not enough for that. Or you can stop at the point all memory error holes have been plugged. That's more useful.
You can go way overboard with templates/macros/traits/generics. Remember C++ and Boost. I understand that Boost is now deprecated.
I should work some more on my solution to the back-reference problem in Rust. The general idea is that Rc/Weak/upgrade/downgrade provide enough expressive power for back references, but the ergonomics are awful. That could be fixed, and some of the checking moved to compile time for the single owner/multiple users case.
You can go overboard with any language concept imaginable, but conflating all these mechanisms makes it sound like you haven't interacted much with non-C++ languages: Rust doesn't have templates or anything like them, traits are an entirely unrelated composition mechanism, and macros are entirely unrelated to the type discussion in the article.
This isn't really "advanced type theory" so much as picking up programming language developments from the '90s. I suppose it's "advanced" in the sense that it's a proper type system and not a glorified macro a la templating, but how is that a bad thing?
But who knows, maybe the "academic brilliance" from the article is more pragmatic than I give it credit for. I sure hope for it if these changes ever go through.
"View types" are interesting. But how much of that generality is really needed? We already have it for arrays, with "split_at_mut" and its friends. That's a library function which uses "unsafe" but exports a safe interface. The compiler will already let you pass two different fields of a struct as "&mut". That covers the two most pressing cases.
Rust is actually really unique among imperative languages in its general composability - things just compose really well across most language features.
The big missing pieces for composability are higher-kinded types (where you could be generic over Option, Result, etc.), and effects (where you could be generic over async-fn, const-fn, hypothetical nopanic-fn, etc.)
The former becomes obvious with the amount of interface duplication between types like Option and Result. The latter becomes obvious with the number of variants of certain functions that essentially do the same thing but with a different color.
Can you compare it to some other imperative language? Because I really don't see anything particularly notable in Rust that would give it this property.
No need for ternary operators. C# unsafe blocks can only appear as statements (so you cannot delegate from a safe to an unsafe constructor, e.g.). C++ cannot return from the middle of an expression.
A related aspect is the type system, which composes with expressions in really interesting ways, so things like constant array sizes can be inferred.
Part of the problems is that the "things just compose really well" point becomes gradually less and less applicable as you involve the lower-level features Rust must be concerned with. Abstractions start to become very leaky and it's not clear how best to patch things up without a large increase in complexity. A lot of foundational PL research is needed to address this sensibly.
I truly don't understand. If you don't want rust to become complex, you don't want it to "develop" fast anyways. Unless you mean you think it will be slower to write code?
> And if it creates a cultural schism between full commitment and pragmatic approaches, it's also trouble.
Zero clue what this is supposed to mean. WTF is "full commitment" here?
> Remember Scala?
Scala, haskell, and others are high level languages in "academic terms." They have high levels of abstraction. The proposals are the opposite of high level abstractions, they instead formalize very important low level properties of code. If anything they decrease abstraction.
I think a lot of things taken for granted these days were considered "too complicated" some time ago: think of how widespread pattern matching, closures, generics, or functional idioms in imperative languages are, and compare to e.g. Java 1.0.
My feeling is that the "acceptable level of complexity" for programming languages goes up over time, so probably stuff like effect types will be almost everywhere in another 10 years.
Currently building out clr, which uses a heuristic (not formal verification) method for checking soundness of zig code, using ~"refinement types". In principle one could build a more formal version of what I'm doing.
Also, the C++ way nowadays is constexpr, templates, and, just around the corner, static reflection.
Maybe but:
- Move fixes Pin
- Linear types prevent memory leaks
- Effects could potentially simplify many things
Each of these features unlocks capabilities people have complained about in Rust, namely async, gen blocks, and memory leaks.
Huh?? Boost is used basically everywhere.
There are many reasons for this. Boost has uneven quality. Many of the best bits end up in the C++ standard. New versions sometimes introduce breaking changes. Recent versions of C++ added core language features that make many Boost library features trivial and clean to implement yourself. Boost can make builds much less pleasant. Boost comes with a lot of baggage.
Boost was a solution for when template metaprogramming in C++ was an arcane unmaintainable mess. Since then, C++ has intentionally made massive strides toward supporting template metaprogramming as a core feature that is qualitatively cleaner and more maintainable. You don’t really need a library like Boost to abstract those capabilities for the sake of your sanity.
If you are using C++20 or later, there isn’t much of a justification for using Boost these days.
A lot of the new stuff that gets added to Boost these days is basically junk that people contribute as resume padding but that very few people actually use. Oftentimes people just dump their library into Boost and then never bother to maintain it thereafter.
People undoubtedly thought going for Affine types was too much, and even simple things like null safety or enums-with-values and the prevalence of Result saw debate with minimalists voicing concerns.
A world where you could write a Rust program that is memory-leak free with Affine types is one I want to live in. Haskell can do it now, but it's just not easy, and Rust has beaten Haskell with its mix of ML-strength types and practicality.
IMO these changes maintain Rust's winning mix of academia and practicality. Here's a proof point: dependent types weren't mentioned :)
This description is also a good crystallization of why one would want linear types
- data accessed by multiple cores and interrupt handlers must be modified under a spin lock and with interrupts disabled
- data accessed by multiple cores but not interrupt handlers only needs the spin lock
- data accessed by one core but maybe interrupt handlers only needs to pay for disabling interrupts
Depending on your core and how performance sensitive the code is, the costs of the above can vary significantly. It would be nice to encode these rules in the type system.
(Ordered types might be useful for “critical sections” — that is, areas where interrupts are disabled and the interrupt disablement guard absolutely must be dropped in order.)
I had some Scala 3 feelings when reading the vision; I hope Rust doesn't get too pushy with type system ideas.
That is how we end up with other ecosystems doubling down on automatic memory management with a good-enough ownership model for low-level coding, e.g. Swift 6, OxCaml, Chapel, D, Linear Haskell, OCaml effects, ...
Where the goal is that those features are to be used by experts, and everyone else stays in the comfort zone.
I don't know if it is true or not, but my feeling is that Scala brought a lot of new ideas. But as I read somewhere, "Scala was written by compiler people, to write compilers", and I can understand that feeling.
Kotlin came after Scala (I think?) and seems to have gotten a lot of inspiration from Scala. But somehow Kotlin managed to stay "not too complex", unlike Scala.
All that to say, Rust has been innovating in the zero-cost abstraction memory safe field. If it went the way of Scala, I wonder if another language could be "the Kotlin of Rust"? Or is that Zig already? (I have no idea about Zig)
It's not really true anymore, Kotlin has slowly absorbed most of the same features and ideas even though they're sometimes pretty half-baked, and it's even less principled than the current Scala ecosystem. JetBrains also wants to make Kotlin target every platform under the sun.
At this point, the only notable difference are HKTs and Scala's metaprogramming abilities. Kotlin stuck to a compiler plugin exposing a standard interface (kotlinx.serialization) for compile-time codegen. Scala can do things like deriving an HTTP client from an OpenAPI specification on the fly, by the LSP backend.
So did Scala long before. It's just that Kotlin got a lot more traction for different reasons.
I don't understand this. You can run any pure Java jar on Android, pretty sure you can do that with Scala too? It's not exactly a "different platform" in terms of programming language. Sure it needs tooling and specific libraries, but that's higher level than the programming language.
Jetbrains is doing interop with Swift (Kotlin -> ObjC -> Swift and more recently Kotlin -> C -> Swift), which Scala never did. But I don't really see how this is relevant in this conversation.
For instance the Android runtime has chronically lagged behind mainline JVM bytecode versions, iirc once Scala started to emit Java 8 bytecode, Android was stuck on Java 6.
Kotlin had other obvious advantages on Android like its thin standard library or the inlining of higher-order functions.
Linux kernel adoption of Rust hasn't been a smooth ride, precisely because of how its type system has landed among the C folks.
It is only happening because the likes of Google and Microsoft want to see it through.
Apple seems to think differently.
> and interestingly some projects like Ladybird have moved away from Swift towards Rust.
Not really interesting once you've spent some time tracking the lifecycle of FOSS projects. I don't think it is the last "moving away" announcement we'll get from Ladybird.
It is all over the place in Swift documentation, WWDC sessions, and even last week on Meet the Team session, regarding on how to write safe systems programming code on Apple platforms.
Ladybird should settle on a language and focus on actually delivering something.
If for anything, Rust isn't married to C as Scala is to Java.
To paraphrase former Scala developer and present poker player, "If Java cut its nose, would you Scala... Oh god, stop! The blood, the blood!"
Everyone who wants to talk to low-level code has to have C ABI. The equivalent Scalaism in Rust would be if Rust reimplemented the C-inspired java.util.Date. Yes. Monday should be 0, not an enum. Because C did it.
Some of the terminology is just unfortunate. For example, I have an intuitive understanding of what a type means. The meaning used in PL theory is somehow wider, but I don't really understand how.
And then there is my pet peeve: side effect. Those should be effects instead, because they largely define the observable behavior of the program. Computation, on the other hand, is a side effect, to the extent it doesn't affect the observable behavior.
But then PL theory is using "effect" for something completely different. I don't know what exactly, but clearly not something I would consider an effect.
* It implements `Write` for `TcpStream`, which is likely what you were using. `Write` requires `&mut`, probably because the same trait is used for writing to application buffers (e.g. through `BufWriter` or just to a `Vec`). So this doesn't compile: [2] I think the error message is pretty good; maybe it's improved since you tried. It doesn't, though, suggest the trick below.
* It also implements `Write` for `&TcpStream`. So if you use the awkward phrasing `(&stream).write_all(b"asdf")`, you don't need mutable access. [3] This allows you to use it with anything that takes the trait, without requiring mutability. You might need this if say you're reading from the socket from one thread while simultaneously writing to it from another thread.
It's a vaguely similar situation with the most common async equivalent, `tokio::net::TcpStream`. The most straightforward way to write to it is with a trait that needs mutable access, but it is possible to write to a shared reference by not using a trait. They also have these `split` and `into_split` methods to get mutable access to something that implements the read traits and something that implements the write traits.
[1] https://doc.rust-lang.org/std/net/struct.TcpStream.html
[2] https://play.rust-lang.org/?version=stable&mode=debug&editio...
[3] https://play.rust-lang.org/?version=stable&mode=debug&editio...
Could you elaborate on that? We consider misleading error messages to be bugs and would like to know more in case we can fix them.
When I first had learned that rust had this concept of “opt-in” mutability, I thought that it must then be an accepted pattern that we make as little as possible be mutable in an attempt to help us better reason about the state of our program. I had come to rust after learning some clojure so I was like “ahh! Immutable by default! So everything’s a value!”
But in reality it feels like rust code is not “designed” around immutability but instead around appeasing the borrow checker - which is actually pretty easy to reason about once you get the hang of the language. But it’s a ton to learn up front
You indirectly deal with this kind of thing when compiling web server code too. It compiles super slowly and can have weird errors. This is because the people who built the web stack in Rust used a million traits/generics/macros, etc.
Even if you look at something like an io_uring library in rust, it uses a bunch of macros instead of just repeating 3 lines of code.
You can click the source code link and read the code here. Macros aren’t needed at all if every single operation isn’t a different type.
Could even enable some stuff like passing loggers around not by parameters but by effect.
I truly do wish we get closer to Ada and even Lean in terms of safety, would be great to see all these theoretical type system features become reality. I use the `anodized` crate right now for refinement type features, and who knows, maybe we get full fledged dependent types too as there aren't many production languages with them and certainly not popular languages.
- some things (compile time bounds checking tensor shapes) are hard / impossible to implement now; "pattern types" could be great for that
- however, "no panic" is already handled by clippy; there might not be much uplift in doing that at the type level.
my 2c: it's great to be excited and brainstorm, some of these ideas might be gold. conveying the benefit is key. it would be good to focus on stuff for which rust doesn't already have a workable solution. i like the pattern types, the rest would take convincing
A language’s type system doesn’t need to model every possible type of guarantee. It just needs to provide a type-safe way to do 95% of things and force its users to conform to the constructs it provides. Otherwise it becomes a buggy hodgepodge of features that interact in poor and unpredictable ways. This is already the case in Scala; we’ve discovered almost 20 bugs in the compiler in the past year.
Actually this is the exact point of a type system. Why would you want to write unit tests for stuff the compiler can guarantee for you at the type system level?
I find that CLI is a great way to model problems. When I find myself doing something that has graduated beyond a comfortable amount of PowerShell, Rust is there for me.
I have a template I've been evolving so it's super easy to get started with something new; I just copy the template and slam Copilot with some rough ideas on what I want and it works out.
https://github.com/teamdman/teamy-rust-cli
Just today used it to replace a GitHub stats readme svg generator thing that someone else made that was no longer working properly.
https://github.com/TeamDman/teamy-github-readme-stats
Decomposes the problem very nicely into incrementally achievable steps
1. `fetch <username>` to get info from GitHub into a cache location
2. `generate <username> <output.svg>` to load stats and write an SVG
3. `serve` to run a webserver accepting GET requests containing the username to do the above
Means that my stuff always has `--help` and `--version` behaviours too
I'm coming from F# and find rust a good compromise: great type safety (though I prefer the F# language) with an even better ecosystem. It can also generate decently sized statically compiled executables, useful for CLI tools, and the library code I wrote should be available to mobile apps (to be developed).
Rewriting existing code for karma points and GitHub stars. Plus some minority actually trying to build something new.
Rust is fast-tracking becoming as bad as C++ in terms of the sheer amount of garbage in it.
IMO the worst thing about C++ isn't that it is unsafe but that it is extremely difficult to learn to a satisfying degree.
This already kind of feels true for Rust, and it will surely be true if people keep shoving their amazing ideas into it.
IMO even async/tokio/error handling aren't that well thought out in Rust. So much for keeping things out of the language.
Maybe Rust just wasn't what I wanted and I am salty about it, but it feels a bit annoying when I see posts like this, considering where Rust is now after many years of shoving stuff into it.
That's actually the point. Many of these additions can be phrased as unifying existing features and allowing them to be used in previously unusable ways and contexts. There's basically no real increase in user-perceived complexity. The Rust editions system is a key enabler of this, and C++ has nothing comparable.
Rust editions don't cover all the use cases one can think of regarding language evolution, and they require full access to source code.
What do you mean? Editions don't require full access to source code. Rust in general relies heavily on having access to source code, but that has nothing to do with how editions work
Could you elaborate more on this? It's not obvious to me right now why (for example) Crate A using the 2024 edition and Crate B using the 2015 edition would require both full access to both crates' source beyond the standard lack of a stable ABI.
See the Rust documentation on what editions are allowed to change, and the advanced migration guide on examples regarding manual code migration.
Not so much what has happened thus far, rather the limitations imposed in what is possible to actually break across editions.
To be fair, Rust tooling does tend toward build-from-source. But this is for completely different reasons than the edition system: if you had a way to build a crate and then feed the binary into builds by future compilers, it would require zero additional work to link it into a crate using a different edition.
Editions buy migration safety and let the standard evolve, but they do not shrink the mental model newcomers must carry and they force tooling and libraries to support multiple modes at once, which is a different kind of maintenance tax than evolving C++ compilers and feature test macros impose.
Require RFCs to include an interaction test matrix, compile time and code size measurements, and a pass from rust-analyzer and clippy so ergonomics regressions are visible before users hit them.
I'm not entirely sure I agree? I don't think any library except for the standard library needs to "support multiple modes at once"; everything else just sets its own edition and can remain blissfully unaware of whatever edition its downstream consumer(s) are using.
> which is a different kind of maintenance tax than evolving C++ compilers and feature test macros impose.
I'm not sure I agree here either? Both Rust and C/C++ tooling and their standard libraries need to support multiple "modes" due to codebases not all using the same "mode", so to me the maintenance burden should be (abstractly) the same for the two.
> Require RFCs to include an interaction test matrix, compile time and code size measurements, and a pass from rust-analyzer and clippy
IIRC rustc already tracks various compilation-related benchmarks at perf.rust-lang.org. rustc also has edition-related warnings [0] (see the rust-YYYY-compatibility groups), so you don't even need clippy/rust-analyzer.
Large Rust organizations often run mixed-edition workspaces because upgrading hundreds of crates simultaneously is impractical. Libraries in the workspace therefore interact across editions during migration periods. So while technically each crate chooses its edition, ecosystem reality introduces cross-edition friction.
Feature test macros in C and C++ primarily gate access to optional APIs or compiler capabilities. Rust editions can change language semantics rather than merely enabling features. Examples include changes to module path resolution, trait object syntax requirements such as dyn, or additions to the prelude. Semantic differences influence parsing, name resolution, and type checking in ways that exceed the scope of a conditional feature macro.
Tooling complexity is structurally different. Rust tools such as rustc, rust analyzer, rustfmt, and clippy must understand edition dependent grammar and semantics simultaneously. The tooling stack therefore contains logic branches for multiple language modes. In contrast, feature test macros generally affect conditional compilation paths inside user code but do not require parsers or analysis tools to support different core language semantics.
Rust promises permanent support for previous editions, which implies that compiler infrastructure must preserve older semantics indefinitely. Over time this creates a cumulative maintenance burden similar to maintaining compatibility with many historical language versions.
Do you have some concrete examples of this outside the expected bump to the minimum required Rust version? I'm coming up blank, and this sounds like it goes against one of the primary goals of editions (i.e., seamless interop) as well.
> So while technically each crate chooses its edition, ecosystem reality introduces cross-edition friction.
And this is related to the above; I can't think of any actual sources of friction in a mixed-edition project beyond needing to support new-enough rustc versions.
> Rust tools such as rustc, rust analyzer, rustfmt, and clippy must understand edition dependent grammar and semantics simultaneously.
I'm not entirely convinced here? Editions are a crate-wide property and crates are Rust's translation units, so I don't think there should be anything more "simultaneous" going on compared to -std=c++xx/etc. flags.
> Over time this creates a cumulative maintenance burden similar to maintaining compatibility with many historical language versions.
Sure, but that's more or less what I was saying in the first place!
Beyond that, what the article shows is exactly what I want: as much type safety as possible, especially for critical systems code, which is increasingly what Rust is being used for.
I don't think it is about having a committee, but rather having a spec. And I mean a spec, not necessarily an ISO standard. There should be a description of how specific features work, what the expected behavior is, what is unexpected and should be treated as a bug, and what the rationale is behind specific decisions.
Coincidentally people here hate specs as well, and that explains some things.
I know there is some work on Rust spec, but it doesn't seem to progress much.
C++ is not cohesive at all
Examples of cohesive languages designed by committees would be Ada and Haskell.
Geez, I'd hate to be in Rust devs' shoes if I can't remove something later when I have a better min/max. I guess this could be done off main, stable.
Rust's development process is also design by committee, interestingly enough.
Sure, but it's still quite informal and they just add things as they go instead of writing a complete standard and figuring out how everything interacts before anything is added to the language. Design-by-committee was probably not the best term to use.
(I should note that of all of the features mentioned in this blog post, the only one I actually expect to see in Rust someday is pattern types, and that's largely because it partially exists already in unstable form to use for things like NonZeroU32.)
https://blog.rust-lang.org/inside-rust/2022/07/27/keyword-ge...
Maybe he's not on the language team (I haven't read enough into Rust governance structures to know definitively), but it's not like he's some random person working on this. And yes, work takes time. I actually disagreed with his initial approach, where his syntax was to have a bunch of effects before the function name, and everyone rightly mentioned how messy it was. So they should be taking it slow anyway.
(The communication aspect of this is something that has bothered me many times in the past- even people who are lang team members often phrase things in a way that makes it sound like something is on its way in, when it's still just in the stage of "we're kinda noodling with ideas.")
I remember adding lifetimes to some structs and then wanting to use generics and self-referencing with lifetimes because that made sense, and then it didn't work because the composition of those features was not yet part of Rust.
Another thing: there are annotations for lifetimes in function signatures, but not inside function bodies, where a lot of magic happens that makes lifetimes really hard to understand and work with. After the borrow checker finally accepted my code, that's when I started getting lots of lifetime errors that hadn't shown up before.
Rust should add these features but take out the old ones with guaranteed automatic update path.
> But there are type systems which can provide even more guarantees. One step beyond “use at most once” is “use exactly once”. Types that provide that guarantee are called “linear” and in addition to guaranteeing the absence of “use after free” they can also guarantee the absence of memory leaks.
Does anyone know why these are called “affine” and “linear”, respectively? What’s the analogy, if any, to the use of those terms in math? (E.g. an affine transformation of vector spaces)
Do these people ever ship anything? Or is it just endless rearranging of deckchairs?
It is easier to understand musl-libc code than an HTTP library in Rust, which is just insane to me.
But then I actually tried both TypeScript and C#, and no. Writing correct async code in those languages is not any nicer at all. What the heck is “.ConfigureAwait(false)”? How fun do you really think debugging promise resolution is? Is it even possible to contain heap/GC pressure when every `await` allocates?
Phooey. Rust async is doing just fine.
And embedded systems vastly outnumber classical computers. Every classical computer includes several microcontrollers, and every car, modern appliance, camera, toy, etc. does too. Safe languages for embedded systems and OSes are very important. Rust just happens to be pretty good for other use cases too, which is a nice bonus. But that means the language can't be tied to a single prescribed runtime, and it can't have a GC, etc.
As might microEJ, Meadows, F-Secure, Astrobe and a few others.
Never heard of either. You will have to expand on your reasoning. Microcontrollers do outnumber classical computers though; that is just a fact. So I don't see why there is anything to disagree about there. Even GPUs have helper microcontrollers for thermal management and other functions.
I bet many of those helper microcontrollers are still assembly or compiler-specific C, and if there is Rust support, most likely only no_std fits, thus no async anyway.
Many microcontrollers are indeed still running C, but things are starting to change. Espressif has official support for Rust, for example, and other vendors are experimenting with that too. Many other microcontrollers have good community support.
> if there is Rust support, most likely only no_std fits, thus no async anyway.
This is just plain incorrect. The beauty of async in Rust is that it does work on no_std. You don't even need an allocator to use Embassy. Because async tasks have a statically known size, you can reserve space for them at compile time; you just need to specify with an attribute how many concurrent instances of a given task should be supported.
Interesting how compiler specific extensions are ok for C, with a freestanding subset, or Rust no_std, but when it goes to other languages it is no longer the same.
I stand corrected on async Rust then.
Not sure what you mean here. For Rust there is only one de facto compiler currently, though work is ongoing on gccrs (not to be confused with rustc_codegen_gcc, which only replaces the llvm backend but keeps the rest of the compiler the same). Work is also ongoing on an official spec. But as it currently stands there are no compiler specific extensions.
If you meant the attribute I mentioned for Embassy? That is just processed by a Rust proc-macro, similar to how serde's derives are used to generate (de)serialization code. Serde likewise adds custom attributes on members.
I admit the skill issue on my part, but I genuinely struggled to follow the concepts in this article. Working alongside peers who push Rust's bleeding edge, I dread reviewing their code and especially inheriting "legacy" implementations. It's like having a conversation with someone who expresses simple thoughts with ornate vocabulary. Reasoning about code written this way makes me experience profound fatigue and possess an overwhelming desire to return to my domicile; Or simply put, I get tired and want to go home.
Rust's safety guardrails are valuable until the language becomes so complex that reading and reasoning about _business_ logic gets harder, not easier. It reminds me of the kid in "A Christmas Story" bundled so heavily in winter gear he can't put his arms down[0]. At some point, over-engineered safety becomes its own kind of risk even though it is technically safer in some regards. Sometimes you need to just implement a dang state machine and stop throwing complexity at poorly thought-through solutions. End old-man rant.
[0]: https://youtu.be/PKxsOlzuH0k?si=-88dxtyegTxIvOYI
Like, the ability to have multiple mut references to a struct as long as you access disjoint fields? That's amazing!!! A redo of Pin that actually composes with the rest of the language? That's pretty awesome too.
I think you're getting tied up because the author is describing these features in a very formal way. But someone has to think about these things, especially if they are going to implement them.
Ultimately, these are features that will make Rust's safety features (which really, are Rust's reason for existing) more ergonomic and easier to use. That's the opposite of what you fear.
Rust instead has all these implicit things that just happen, and now needs ways to specify that in particular cases, it doesn't.
He's talking about this problem. Can this code panic?
You can't easily answer that in Rust or Zig. In both cases you have to walk the entire call graph of the function (which could be arbitrarily large) and check for panics. It's not feasible to do by hand. The compiler could do it, though. Fallible memory allocations are already needed for Rust-on-Linux, so that also has independent interest.
Although, I think I'd prefer a "doesn't panic" effect just to keep backwards compatibility (allowing functions to panic by default).
Rust tries to prevent developers from doing bad things, then has to include ways to avoid these checks for cases where it cannot prove that bad things are actually OK. Zig (and many others such as Odin, Jai, etc.) allow anything by default, but surface the fact that issues can occur in its API design. In practice the result is the same, but Rust needs to be much more complex both to do the proving and to allow the developers to ignore its rules.
[1]: https://ziglang.org/documentation/0.15.2/std/#std.math.divEx...
I'd be interested if this weren't true, since the only feasible compiler solutions to preventing division-by-zero errors are either: defining the behaviour, which always ends up surprising people later on; or incredibly cumbersome or underperformant type systems/analyses which ensure that denominators are never 0.
It doesn't look like Zig does either of these.
[0]: https://ziglang.org/documentation/master/#Division-by-Zero
I don't think it's very cumbersome if the compiler checks whether the divisor could be zero. Some programming languages (Kotlin, Swift, Rust, TypeScript...) already do something similar for possible null pointer access: they require that you add a check like "if s == null" before the access. The same can be done for division (and remainder/modulo). In my own programming language, this is what I do: you cannot have a division by zero at runtime, because the compiler does not allow it [1]. In my experience, integer division by a variable is not all that common in reality. (And floating point division does not panic, and integer division by a non-zero constant doesn't panic either.) If needed, one could use a static function that returns 0, or panics, or whatever is best.
[1] https://github.com/thomasmueller/bau-lang/blob/main/README.m...
The zig compiler can’t possibly guarantee this without knowing which parts of the code were written by you and which by other people (which is impossible).
So really it’s not “the developer” wrote the thing that does the panic, it’s “some developer” wrote it. And how is that different from rust?
Could you share a situation where the behavior is necessary? I am curious if I could work around it with the current feature set.
Perhaps I take issue with peers who throw bleeding-edge features at situations that don't warrant them. Last old-man anecdote: as a hobbyist woodworker, it pains me to see people buying expensive tools to accomplish something. It's almost as if they lack the creativity to use the tools they already have. "If I had xyz tool I would build something so magnificent," they say. This amounts to having many low-quality, single-purpose tools where a single high-quality table saw could fit the need. FYI, a table saw could suit 90% of your cutting/shaping needs with the right jig. I don't want this to happen in Rust.
The effects mentioned in the article are not too uncommon in embedded systems, particularly if they are subject to more stringent standards (e.g., hard realtime, safety-critical, etc.). In such situations predictability is paramount, and that tends to correspond to proving the absence of the effects in the OP.
I do wonder if it is possible to bin certain features to certain, uh, distributions(?), of rust? I'm having trouble articulating what I mean but in essence so users do not get tempted to use all these bells and whistles when they are aimed at a certain domain or application? Or are such language features beneficial for all applications?
For example, sim cards are mini computers that actually implement the JVM and you can write java and run it on sim cards (!). But there is a subset of java that is allowed and not all features are available. In this case it is due to compute/resource restrictions, but something to a similar tune for rust, is that possible?
On that note, I guess one could hypothetically limit certain effects to certain Rust subsets (for example, an "allocates" effect may require alloc, a "filesystem" effect may require std, etc.), but I'd imagine the general mechanism would need to be usable everywhere considering how foundational some effects can be.
> Or are such language features beneficial for all applications?
To (ab)use a Pixar quote, I suppose one can think of it as "not all applications may need these features, but these features should be usable anywhere".
But this kinda isn't about "behavior" of your code; it's about how the compiler (and humans) can confidently reason about your code?
I am biased to think more features negatively impact how humans can reason about code, leading to more business logic errors. I want to understand: can we make the compiler understand our code differently without additional features, by wielding mastery of the existing primitives? I very well may be wrong in my bias. Human ingenuity and creativity are not to be underestimated. But neither is laziness. Users will default to "out of box" solutions over building with language primitives. Adding more and more features will dilute our mastery of the fundamentals.
Same thing happens in real time trading systems, distributed systems, databases, etc., you have to design some super critical hot path that can never fail, and you want a static guarantee that that is the fact.
People say "Rust is more complex to onboard to, but it is worth it", but a lot of the onboarding hurdle is the extra complexity added by experts being smart. And it may be a reason why a language doesn't get the adoption its creators hoped for (Scala?). Rust does not have issues with popularity, and the high onboarding barrier may eventually have a positive impact, where "Just rewrite it in Rust" is no more and people only choose Rust where it is most appropriate. Use the right tool for the job.
The complexity of Rust made me check out Gleam [0], a language designed for simplicity, ease of use, and developer experience. A wholly different design philosophy. But not less powerful, as a BEAM language that compiles to Erlang, but also compiles to Javascript if you want to do regular web stuff.
[0] https://gleam.run
This can happen in any language and is more indicative of not having a strong lead safeguarding the consistency of the codebase. Now Scala has had the added handicap of being able to express the same thing in multiple ways, all made possible in later iterations of Scala, and finally homogenised in Scala 3.
Thing is, the alternative to "smart" code that packs a lot into a single line is code where that line turns into multiple pages of code, which is in fact worse for understanding. At least with PL features, you only have to put in the work once and you can grok how they're meant to be used anywhere.
Don't get me wrong, rust has plenty of "weird" features too, for example higher rank trait bounds have a ridiculous syntax and are going to be hard for most people to understand. But, almost no one will ever have to use a higher rank trait bound. I encounter such things much more rarely in rust than in almost any other mainstream language.
Most people conflate "complexity" and "difficulty". Rust is a less complex language than Python (yes, it's true), but it's also much more difficult, because it requires you to do all the hard work up-front, while giving you enormously more runtime guarantees.
* no-panic: https://docs.rs/no-panic/latest/no_panic/
* Safe Rust has no undefined behavior: https://news.ycombinator.com/item?id=39564755
I read the HN comments before I read the OP, which made me worry that the post was going to be some hifalutin type wonkiness. But instead the post is basically completely milquetoast, ordinary and accessible. I'm no type theorist--I cannot tell you what a monad is--and I confess that I don't understand how anyone could be intimidated by this article.
Things like effects aren't obvious to people, at least based on my experience of trying to teach it to people
The intro where they describe effects as essentially being "function colors" (referring to another article fairly often linked in hackernews) plus give lots of concrete examples (async, const, try) seems like more than enough to be obvious to the readers.
1. Go with a better type system. A compiled language, that has sum types, no-nil, and generics.
2. A widely used, production systems language that implements PL theory up until the year ~2000. (Effects, as described in this article, were a research topic in 1999.)
I started with (1), but as I got more and more exposed to (2), I started looking back on times when I fought with the type system and seeing that some of these PL nerds have a point. I think my first foray into higher-kinded types was trying to rewrite a dynamic Python dispatch system in Rust while keeping types at compile time.
The problem is, many of these PL-nerd concepts are rare and kind of hard to grok at first, and I can easily see them scaring people off from the language. However, I think once you understand how they work, and once the PL nerds dumb down the terminology, most people will come around to them. Concepts like "sum types" and "monads" are IMO easy-to-understand concepts with dumb names and even more complex standard definitions.
I was looking for something like that and eventually found Crystal (https://crystal-lang.org) as the closest match: LLVM-compiled, strong static typing with explicit nils and very good type inference, stackful coroutines, channels, etc.
Some things just need precise terminology so humans can communicate about them to humans without ambiguity. It doesn't mean they're inherently complex: the article provides simple definitions. It's the same for most engineering, science and language. One of the most valuable skills I've learned in my career is to differentiate between expressions, statements, items, etc. - how often have you heard that the hardest problem in software development is coordinating with other developers? If you learn proper terminology, you can say exactly what you mean. Simple language doesn't mean more clear.
I wasn't born knowing Rust, I had to learn it. So I'm always surprised by complaints about Rust being too complex directed at the many unremarkable people who have undergone this process without issues. What does it say, really? That you're not as good as them at learning things? In what other context do software people on the internet so freely share self-doubt?
I also wonder about their career plans for the future. LLMs are already capable of understanding these concepts. The tide is rising.
I would love to introduce more rust at work, but I dread that someone is going to ask about for<'a>, use<'a>, differences between impl X vs Box<dyn X>, or Pin/Unpin, and I don't have proper answers either.
It's an issue with "someone"; all those answers can be obtained in under a minute from AI.
But I now realize that as people grow, their desire to learn new things sometimes fades, and an illusion of "already knowing enough" may set in.
Don't trust that illusion. We still have to learn new things every day. (And new fancy words for simple concepts is the easy part.)
You can use smart pointers everywhere and stop being bothered by the borrow checker.
I didn't understand that you were making fun of verbosity until the word 'domicile'. I must be one of those insufferable people who expresses simple thoughts with ornate vocabulary...
The article was comprehensible to me, and the additional function colorings sound like exciting constraints I can impose to prevent my future self from making mistakes rather than heavy winter gear. I guess I'm closer to the target audience?
And later, when I read "Because though we’re not doing too bad, we’re no Ada/SPARK yet" I couldn't help thinking that there must be a reason why those languages never became mainstream, and if Rust gets more of these exciting esoteric features, it's probably headed the same way...
But these top-comments sometimes paint with a broad brush. As in this case.
> I admit the skill issue on my part, but I genuinely struggled to follow the concepts in this article. Working alongside peers who push Rust's bleeding edge, I dread reviewing their code and especially inheriting "legacy" implementations. It's like having a conversation with someone who expresses simple thoughts with ornate vocabulary. Reasoning about code written this way makes me experience profound fatigue and possess an overwhelming desire to return to my domicile; Or simply put, I get tired and want to go home.
Two paragraphs in and nothing concrete yet. We can contrast with the article. Let’s just consider the Effects section.
It describes four examples: functions that 1) don’t unwind, 2) guaranteed termination, 3) are deterministic 4) do not “call host APIs”, which is “IO” somehow? (this last one seems a bit off)
The first point is about not panicking (keyword panic given). Point two is about not looping forever, for example. Point three can be contrasted with non-determinism. Is that jargony? A fancy-pants term for something simpler? The fourth point seems a bit, I don’t know, could be rewritten.
All of these at least attempt to describe concrete things that you get out of an “effects system”.
> Rust's safety guardrails are valuable until the language becomes so complex that reading and reasoning about _business_ logic gets harder, not easier. It reminds me of the kid in "A Christmas Story" bundled so heavily in winter gear he cant put his arms down[0]. At some point, over-engineered safety becomes its own kind of risk even though it is technically safer in some regards. Sometimes you need to just implement a dang state machine and stop throwing complexity at poorly thought-through solutions. End old-man rant.
This is just a parade of the usual adjectives with an unexplained analogy thrown in (how will these additions cripple Rust usage?). "So complex", "over-engineered safety", "complexity" (again), "poorly thought-through solutions".
TFA is about concrete things. OP here is a bunch of adjectives. And a bunch of words about complexity and not understanding would be fine if TFA did not have any understandable, practical parts. But as we’ve gone over it does...
People see Rust. Then they see a comment reacting to nondescript complexity. The rest is history.
A good anti-complexity comment would address something concrete like Async Rust. And there are plenty of such comments to vote on.
Rust requires actual reading, like Typescript, only more detailed.
However, it also has true sum types:
But as you can see, the syntax leaves a lot to be desired and may not be all that obvious to those who are hung up thinking in other languages.

Right — while it does have sum types, it doesn't have some other features found in other languages.
But, of course, if one wanted those features they would talk about those features. In this discussion, we're talking specifically about sum types, which Go most definitely does have.
> nil -- don't forget nil!
This is why alternative syntax has never been added. Nobody can figure out how to eliminate nil or make it clear that nil is always part of the set in a way that improves upon the current sum types.
The set of objects that can fulfill that interface is not just string and int, it’s anything in the world that someone might decide to write an isSumType function for.
No. Notice the lowercase tag name. It is impossible for anyone else to add an arbitrary type to the closed set.
Unless your argument is that sum types fundamentally cannot exist? Obviously given a more traditional syntax like,
...one can come along and add C just the same. I guess that is true in some natural-properties-of-the-universe way. It is a poor take in context, however.

At a new job, I am writing my first microservice in Go. I used to be a Rust/C++ (kernel) and Python/PHP/JS dev (fullstack). Rust is allowed, but the team is heavily invested in Go already... I don't think I'll be able to convince them to learn Rust! Lol
As a user, using a feature such as pattern types will be natural if you know the rest of the language.
Do you have a function that accepts an enum `MyEnum` but has an `unreachable!()` for some variant that you know is impossible to have at that point?
Then you can accept a `MyEnum is MyEnum::Variant | MyEnum::OtherVariant` instead of `MyEnum` to tell which are the accepted variants, and the pattern match will not require that `unreachable!()` anymore.
The fact someone does not know this is called "refinement types" does not limit their ability to use the feature effectively.
That might be true, but it shows the direction that Rust is taking: put in the kitchen sink, just like C++ and Scala did. And _that_ is very much important for users.
Also I do not think that adding features is always bad to the point of comparing with Scala. Most of the things the article mentions will be almost invisible to users. For example, the `!Forget` thing it mentions will just end up with users getting new errors for things that before would have caused memory leaks. What a disgrace!
Then, pattern types allow you to remove panics from code, which is super helpful in many critical contexts where Rust is used in production, even in the Linux kernel once they bump the language version far enough.
Ironically, Rust was unusably hard prior to the late-2018 edition. So now your academic article has to explain both the baseline borrow checker and non-lexical lifetimes, a vast increase in complexity.