data, _ := os.ReadFile(path)
Saying that is not explicit is just wrong. In a language like Rust, if the return type is `Result<MyDataType, MyErrorType>`, the caller cannot access the `MyDataType` without code that acknowledges there might be an error (`match`, `if let`, `unwrap`, etc.). It literally won't compile otherwise.
> But if you don't know Go, it's just an underscore.
And if you don't know Rust, `.unwrap()` is just a getter method.
But I'm just explaining the argument as I understand it to the commenter who asked. I'm not saying it is right. They have tradeoffs and perhaps you prefer Go's tradeoffs.
What if it’s a function that returns the coordinates of a vector and you don’t care about the y coordinate?
x, _ :=
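A fuller version of that sketch (`coords` is a hypothetical function, made up for illustration):

```go
package main

import "fmt"

// coords is a hypothetical function returning a 2D point.
func coords() (x, y int) {
	return 3, 4
}

func main() {
	// Only the x coordinate matters here; the blank identifier
	// discards y without declaring an unused variable.
	x, _ := coords()
	fmt.Println(x) // 3
}
```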
With the topic of `.unwrap()`, the `_` is referencing an ignored error. Better laid out as:

func ParseStringToBase10i32AndIDoNotCare(s string) int64 {
	i, _ := strconv.ParseInt(s, 10, 32)
	return i
}
Unhandled errors in Go keep the application going, whereas Rust's `.unwrap()` crashes. Ignoring an output value or set is just fine. You don't always need both the key and value of a map, nor the y axis in vector<x,y,z> math.
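Both of those cases can be sketched with the blank identifier; a minimal example:

```go
package main

import "fmt"

func main() {
	m := map[string]int{"a": 1, "b": 2}

	// Only the keys matter here; the values are never bound.
	for k := range m {
		fmt.Println(k)
	}

	// Only the values matter; the blank identifier discards the key.
	sum := 0
	for _, v := range m {
		sum += v
	}
	fmt.Println(sum) // 3
}
```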
I think you can make the same argument here - rust provides unwrap and if you don’t know go, that’s just how you get the value out of the Result Type.
a, err := f()
// whether you forgot to handle the `err` or not,
// the `a` carries a zero value, or some other value.
In Rust it's not the case, as the `T` in `Result<T, E>` won't be constructed in case of an error. (Or in your commit hook. If you want to develop without worrying about such things, and then clean it up before check-in, that's a development approach that Go is perfectly fine with.)
And the more I work with Go, the less I understand why warnings were not added to the compiler. Essentially, instead of having them in the compiler itself, one needs to run a separate tool, which will have a much smaller user base.
But anyway, in Go, it's sometimes fine to have both non-nil error and a result, e.g. the notorious EOF error.
But this makes the language feel like Python, in some ways. Besides nil, the lack of expressivity in its expressions makes it more idiomatic to write things imperatively with for loops and appending to slices instead of mapping over the slice. Its structurally typed interfaces feel more like an explicit form of duck typing.
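A sketch of the imperative style described, where a more expression-oriented language would map over the collection:

```go
package main

import "fmt"

func main() {
	words := []string{"go", "feels", "imperative"}

	// Idiomatic Go: a for loop appending into a pre-allocated slice.
	lengths := make([]int, 0, len(words))
	for _, w := range words {
		lengths = append(lengths, len(w))
	}
	fmt.Println(lengths) // [2 5 10]
}
```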
Also, Go has generics now, finally.
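For example, a generic Map helper is now expressible (user-written here; to my knowledge the standard library's slices package does not ship one):

```go
package main

import "fmt"

// Map applies f to every element of xs and returns the results.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	fmt.Println(Map([]int{1, 2, 3}, func(n int) int { return n * n })) // [1 4 9]
}
```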
From what I remember of a presentation they had on how and why the made Go, this is no coincidence. They had a lot of Python glue code at Google, but had issues running it in production due to mismatched library dependencies, typing bugs, etc. So they made Go to be easy to adopt their Python code to (and especially get the people writing that code to switch), while addressing the specific production issues they faced.
Disagree; they made many decisions that differ from the mainstream: OOP and syntax, as examples.
I would argue that after a short tutorial on basic syntax, it's easier for a Python/JavaScript programmer to understand Go code than Rust.
For some uses, that's all you need, and having more features often detracts from your experience. But I'm doubtful about how often exactly: I have been able to carve out a simple sub-language that is easier to use than Go from every stack that I've tried.
I'm not convinced that you couldn't have good software engineering support and simplicity in the same language. But for a variety of mostly non-technical reasons, no mainstream language provides that combination, forcing developers to make the tradeoff that they perceive as suiting them best.
But that's never going to happen because there is a very vocal subpopulation (I think it is a minority, but don't know for sure, and it is hard to measure that kind of thing) that is opposed to any "syntactic sugar" and wants it to stay the way it is.
https://go.dev/blog/error-syntax
tl;dr - proposals are no longer being considered
Hot take, maybe, but this is one of the few "mistakes" I see with Go. It makes adding QoL things like you mentioned difficult; requires shoehorning in pointers to allow for an unset condition; leaves some types, like maps, without a safe default/zero value; and makes comparisons (especially generic ones) overly complex.
type AccountName string
except you write it like (abridged) type AccountName refined.Scalar[AccountName, string]
func (AccountName) IsValid(value string) bool {
return accountNameRegexp.MatchString(value)
}
and that enforces an invariant through the type system. In this case, any instance of type AccountName needs to hold a string conforming to a certain regular expression. (Another classical example would be "type DiceRoll int" that is restricted to values 1..6.) But then you run into the problem with the zero value, where the language allows you to say
var name AccountName // initialized to zero value, i.e. empty string
and now you have an illegal instance floating around (assuming for the sake of argument that the empty string is not a legal account name). You can only really guard against that at runtime, by panic()ing on access to a zero-valued AccountName. Arguably, this could be guarded against with test coverage, but the more insidious variant is

type AccountInfo struct {
	ID   int64       `json:"id"`
	Name AccountName `json:"name"`
}
When you json.Unmarshal() into that, and the payload does not contain any mention of the "name" field, then Name is zero-valued and you don't have any chance of noticing. The only at least somewhat feasible solution that I could see was a library function that goes over freshly unmarshaled payloads and looks for any zero-valued instances of any refined.Scalar type. But that gets ugly real quick [1], and once again, it requires the developer to remember to do this.

[1] https://github.com/majewsky/gg/blob/refinement-types-4/refin...
So yeah, I do agree that zero values are one of the language's biggest mistakes. But I also agree that this is easier to see with 20 years of hindsight and progress in what is considered mainstream for programming languages. Go was very much trying to be a "better C", and by that metric, consistent zero-valued initialization is better than having fresh variables be uninitialized.
There are some articles about the diagnostic pattern in Zig, e.g. [1], [2]
[1] https://github.com/ziglang/zig/issues/2647#issuecomment-5898...
Let's say you have this:
```
part, err := doSomething()
if err != nil {
	return nil, err
}

data, err := somethingElse(part)
if err != nil {
	return nil, err
}

return data, nil
```
Then as long as your function followed the contract of 0+ value returns followed by one trailing `error` return, that could absolutely be turned into just the value returns plus an auto-returned error.
The fact that the `error` interface is easy to match and extend, plus the common pattern of adding an error as the last return value, makes this possible.
What am I missing here?
There's not much broken with the error type itself, but the "real" problem is that the Go team decided not to change the way errors are handled, so it becomes a question of error handling ergonomics.
The article doesn't have a clear focus, unfortunately, and I think it was written by an LLM. So I think it's more useful to read about the struggles in the Go team's own article.
What are we doing here?
Many developers, especially those in a rush, or juniors, or those coming from exception-based languages, tend to want to bubble errors up the call stack without much thought. But I think that's rarely the best approach. Errors should be handled deliberately, and those handlers should be tested. When a function has many ways in which it can fail, I take it as a sign to rethink the design. In almost every case, it's possible to simplify the logic to reduce potential failure modes, minimizing the burden of writing and testing error handling code and thus making the program more robust.
To summarize, in my experience, well-written code handles errors thoughtfully in a few distinct places. Explicit error handling does not have to be a burden. Special language features are not strictly necessary. But of course, it takes a lot of experience to know how to structure code in a way that makes error handling easy.
Sure … it is true that Go errors can carry data, and Zig ones perhaps do not, but I don't see how that disqualifies a `try` from being possible. Rust's errors are rich, and Rust had `try!` (which is now just `?`).
The article's reasoning around rich errors seems equally muddled.
> In Zig, there's no equivalent. If both calls fail with error.FileNotFound, the error value alone can't tell you which file was missing.
Which is why I'm not a huge fan of Zig's error type … an int cannot convey the necessary context! (Sometimes I'd have thought C had so thoroughly demonstrated this as bad with, e.g., `mkdir` telling people "No such file or directory." — yeah, no such directory, that's why I'm calling `mkdir` — that all new designs would avoid being limited to integer error codes.)
But then we go for …
> Zig's answer is the Error Return Trace: instead of enriching the error value, the compiler tracks the error's path automatically.
But the error's "path" cannot tell you what file was missing, either…
> It tells you where the error traveled, not what it means. Rather than enriching the error value, Zig enriches the tooling.
Sure … like, again, a true-ish statement (or opinion), but one that just doesn't contribute to the point, I guess? A backtrace is also useful, but having the exact filename is useful, too. It depends on the exact bug that I'm tracking: sometimes the backtrace is what I need, sometimes the name of the missing file is what I need. Having both would be handy, depending on circumstance, and the call stack alone does not tell you the name of the missing file.
… how does either prevent a `try` statement?
We try to argue that somehow the stdlib would need to change, but I do not see how that can be. It seems like Go could add try, as syntactic sugar for the pattern at the top of the article. (& if the resulting types would have type errored before, they could after sugaring, etc.)
Most languages eventually end up confusing try-catch, errors, exceptions, handle?, re-throw?... together with most programmers mixing internal errors, business errors, transient ones... creating complex error types with error factories, ifs and elses... Everything returning the same simple error is simply genius.
Also, a lot of Zig posts are tone-deaf like this: "Oh look, something so simple and we're the first to think about it. We must be really good."
... So what? From what I can tell that's all anyone has asked for in the context of something to just return nil/error up the call stack.
A little syntax sugar that won't break backwards compatibly and makes the intent of the code clearer is a win-win. I've never seen a really reasonable response from the Go team on why they don't want to do it.
Go acknowledges taking the design of the object file format from, IIRC, Modula-2. You are very wrong.
In this case no, it's not the case that go can't add a "try" keyword because its errors are unstructured and contain arbitrary strings. That's how Python works already. Go hasn't added try because they want to force errors to be handled explicitly and locally.
Once someone figures it out, they will come. The Go team has expressed wanting it.
In general or specifically in Go?
Yes, mimicking Zig's error handling mechanics in Go is very much impossible at this point, but I don't see why we can't have a flavor of said mechanics.