Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.
Then again, if someone is willing to push it through WG21 no matter what, maybe.
C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.
Working with native C APIs in C++ is akin to using unsafe in Rust, C#, Swift...: it should be wrapped in type-safe functions or classes/structs, never used directly outside implementation code.
If folks actually followed this more often, there would be far fewer CVE reports in C++ code caused by calling into C.
The reason why C++ is as popular as it is is in large part due to how easy it is to upgrade an existing C codebase in-place. Doing a complete RAII rewrite is at best a long term objective, if not often completely out of the question.
Acknowledging this reality means giving affordances like `defer` that make upgrading C codebases, and C++ code written in a C style, easier without having to rewrite the universe. Because if you're asking me to rewrite code in a C++ style all in one go, I might not pick C++.
EDIT: It also occurs to me that destructors also have limitations. They can't throw, which means that if you encounter an issue in a dtor you often have to ignore it and hope it wasn't important.
I ran into this particular annoyance when I was writing my own stream abstractions - I had to hope that closing the stream in the dtor didn't run into trouble.
Yes, unfortunately that compatibility is also the Achilles heel of C++: so many C++ libraries are plain old C libraries with extern "C" { ... } added in when using a C++ compiler, which is also why so many CVEs keep happening in C++ code.
If I'm constructing a particular C object once in my entire code base, calling a couple functions on it, then freeing it, I'm not much more likely to get it right in the RAII wrapper than in the one place in my code base I do it manually. At least if I have tools like defer to help me.
    struct Example {
        Example() = default;
        ~Example()
        try {
            // release resources for this instance
        } catch (...) {
            // take care of what went wrong in the whole destructor call chain
        }
    };

-- https://cpp.godbolt.org/z/55oMarbqY

Now with C++26 reflection, one could eventually generate such boilerplate.
The mechanism in Java I was alluding to is really the Throwable::addSuppressed method; it isn’t tied to the use of a try-block. Since Java doesn’t have destructors, it’s just that the try-with-resources statement is the canonical example of taking advantage of that mechanism.
I'm assuming that using defer would have prevented the gotos in the first case, and the bug.
C is hard enough as is to get right and every tool and development pattern that helps avoid common pitfalls is welcome.
By putting all the cleanup code at the end of the function after a cleanup label, you have reduced the complexity of resource management: you have one place where the resource is acquired, and one place where the resource is freed. This is actually manageable. Before you return, you check every resource you might have acquired, and if your handle (pointer, file descriptor, PID, whatever) is not in its null state (null pointer, -1, whatever), you call the free function.
By comparison, if you try to put the correct cleanup functions at every exit point, the problem explodes in complexity. Whereas correctly adding a new resource using the 'goto cleanup' pattern requires adding a single 'if (my_resource is not its null value) { cleanup(my_resource) }' at the end of the function, correctly adding a new resource using the 'cleanup at every exit point' pattern requires going through every single exit point in the function, considering whether or not the resource will be acquired at that time, and if it is, adding the cleanup code. Adding a new exit point similarly requires going through all resources used by the function and determining which ones need to be cleaned up.
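For illustration, a minimal C sketch of the goto-cleanup pattern described above (the file and buffer here are made up, not taken from the thread):

    #include <stdio.h>
    #include <stdlib.h>

    int process(const char *path) {
        int ret = -1;
        char *buf = NULL;
        FILE *fp = NULL;

        buf = malloc(4096);
        if (buf == NULL)
            goto cleanup;

        fp = fopen(path, "r");
        if (fp == NULL)
            goto cleanup;

        /* ... do the actual work ... */
        ret = 0;

    cleanup:
        /* one place where every resource is checked and released */
        if (fp != NULL)
            fclose(fp);
        free(buf);          /* free(NULL) is a no-op */
        return ret;
    }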
C is hard enough as it is to get right when you only need to remember to clean up resources in one place. It gets infinitely harder when you need to match up cleanup code with returns.
    int ret;
    FILE *fp;
    if ((fp = fopen("hello.txt", "w")) == NULL) {
        perror("fopen");
        ret = -1;
    } else {
        const char message[] = "hello world\n";
        if (fwrite(message, 1, sizeof message - 1, fp) != sizeof message - 1) {
            perror("fwrite");
            ret = -1;
        } else {
            ret = 0;
        }
        /* fallible cleanup is unpleasant: */
        if (fclose(fp) < 0) {
            perror("fclose");
            ret = -1;
        }
    }
    return ret;
It is in particular universal in Microsoft documentation (but notably not actual Microsoft code; e.g. https://github.com/dotnet/runtime has plenty of cleanup gotos). In practice, well, the “of doom” part applies: two fallible functions on the main path is (I think) about as many as you can use it for and still have the code look reasonable. A well-known unreasonable example is the official usage sample for IFileDialog: https://learn.microsoft.com/en-us/windows/win32/shell/common....
Using defer, the code would be:
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        return err;
        return err;
This has the exact same bug: the function exits with a successful return code as long as the SHA hash update succeeds, skipping further certificate validity checks. The fact that resource cleanup has been relegated to defer so that 'goto fail;' can be replaced with 'return err;' fixes nothing.

It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined to be a no-op in the C spec).
    resource, err := newResource()
    if err != nil {
        return err
    }
    defer resource.Close()
IMO this pattern makes more sense, as running the cleanup in most cases won't make sense unless you have acquired the resource in the first place.

free may accept a NULL pointer, but it also doesn't need to be called with one either.
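As a rough C sketch of that same ordering, assuming the braces form of the proposed defer statement and a compiler that implements it (names are made up):

    #include <stdlib.h>

    int with_defer(void) {
        char *p = malloc(64);
        if (p == NULL)
            return -1;          /* nothing acquired yet, nothing to defer */
        defer { free(p); }      /* registered only after acquisition succeeded */

        /* ... use p ... */
        return 0;               /* free(p) runs on the way out */
    }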
RAII is not the right solution for C. I wouldn't want C to grow constructors and destructors. So far, C only runs the code you ask it to; turning variable declaration into a hidden magic constructor call would, IMO, fly in the face of why people may choose C in the first place.
In addition, RAII has its own complexities that then need to be dealt with, i.e. move semantics, which obviously C does not have nor will it likely ever.
In the example above, the question of "do I put defer before or after the `if err != nil` check" is deferred to the programmer. RAII forces you to handle the complexity, defer lets you shoot yourself in the foot.
Related blog post from last year: https://thephd.dev/c2y-the-defer-technical-specification-its... (https://news.ycombinator.com/item?id=43379265)
https://oshub.org/projects/retros-32/posts/defer-resource-cl...
Of course, that idea already isn’t correct in many languages; function arguments are evaluated before a function is called, operator precedence often breaks it, etc, but this moves entire statements, potentially by many lines.
But there are lots of cases in the kernel where we have 10+ goto labels for error paths in complex setup functions. I think when this starts making its way into those areas it will really start having an impact on bugs.
Sure, most of those bugs are low impact (it's rare that an attacker can trigger the broken error paths) but still, this is basically free software quality, it would be silly to leave it on the table.
And then there's the ACTUAL motivation: it makes the code look nicer.
Genuinely curious, as I only have a small amount of experience with C and have found goto to be OK so far.
In any case, the biggest advantage IMO is that resource acquisition and cleanup are next to each other. My brain understands the code better when I see "this is how the resource is acquired, this is how the resource will be freed later" next to each other, than when it sees "this is how this resource is acquired" on its own or "this is how the resource is freed" on its own. When writing, I can write the acquisition and the free at the same time in the same place, making me very unlikely to forget to free something.
    using (var resource = acquire()) {
    } // implicit resource.Dispose();

While we don't have the same simplicity in C because we don't use this "disposable" pattern, we could still perhaps learn something from the syntax and use a secondary block to have scoped defers. Something like:

    using (auto resource = acquire(); free(resource)) {
    } // free(resource) call inserted here.

That's not so different from how a `for` block works:

    for (auto it = 0; it < count; it++) {
    } // automatically inserts it++; it < count; and a conditional branch after the secondary block of the for loop.
A trivial "hack" for this kind of scoped defer would be to just wrap a for loop in a macro: #define using(var, acquire, release) \
auto var = (acquire); \
for (bool var##_once = true; var##_once; var##_once = false, (release))
using (foo, malloc(szfoo), free(foo)) {
using (bar, malloc(szbar), free(bar)) {
...
} // free(bar) gets called here.
} // free(foo) gets called here. using (var resource1 = acquire() {
        using (var resource2 = acquire()) {
            using (var resource3 = acquire()) {
                // use resources here..
            }
        }
    }
Compared to:

    var resource1 = acquire();
    defer { release(resource1); }
    var resource2 = acquire();
    defer { release(resource2); }
    var resource3 = acquire();
    defer { release(resource3); }
    // use resources here
Of course if you want the extra scopes (for whatever reason), you can still do that with defer, you're just not forced to.

    using (auto res1 = acquire1(); free(res1))
    using (auto res2 = acquire2(); free(res2))
    using (auto res3 = acquire3(); free(res3))
    {
        // use resources here
    }
    // free(res3); free(res2); free(res1); called in that order.

The argument for this approach is that it is structural. `defer` statements are not structural control flow: they're `goto` or `comefrom` in disguise.

---
Even if we didn't want to introduce a new scope, we could have something like F#'s `use`[1], which makes the resource available until the end of the scope it was introduced in.
    use auto res1 = acquire1() defer { free(res1); };
    use auto res2 = acquire2() defer { free(res2); };
    use auto res3 = acquire3() defer { free(res3); };
    // use resources here

In either case (using or use-defer), the acquisition and release are coupled together in the code. With `defer` statements they're scattered as separate statements. The main argument for `defer` is to keep the acquisition and release of resources together in code, but defer statements fail at doing that.

[1]: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
But it adds a new dimension of control flow, which in a garbage collected language like Go is less worrisome whereas in C this can create new headaches in doing things in the right order. I don't think it will eliminate goto error handling for complex cases.
But people know it from other languages, and seem to like it, so I guess it is good to have it also in C.
http://robertseacord.com/wp/2020/09/10/adding-a-defer-mechan...
Cleanup is good. Jumping around with "goto" confused most people in practice. It seems highly likely that most programmers model "defer" differently in their minds.
EDIT:
IIRC it was CVE-2025-26465. Read the code and the patch.
With "goto" the cleanup-up can jump anywhere. With "defer" the cleanup cannot really jump anywhere. It is easier to mentally stick to simply cleaning up in a common sense way. And taking care of multiple "unrelated" clean-up steps is "handled for you."
(Attacks on this sometimes approach complaints about lack of "common sense".)
2. Defer is mostly useful for C++ code that needs to interact with C API because these two are fundamentally different. C API usually exposes functions "create_something" and "destroy_something", while the C++ pattern is to have an object that has "create_something" hidden inside its constructor, and "destroy_something" inside its destructor.
For example, if I have an FFI function that transfers the ownership of some allocator in the middle of the function.
    #define RETURN(x) result=x;goto CLEANUP

    void myfunc() {
        int result=0;
        if (commserror()) {
            RETURN(0);
        }
        .....
        /* On success */
        RETURN(1);
    CLEANUP:
        if (myStruct) { free(myStruct); }
        ...
        return result;
    }
The advantage being that you never have to remember which things are to be freed at which particular error state. The style also avoids lots of nesting because it returns early. It's not as nice as having defer but it does help in larger functions.

You also don't have to remember this when using defer. That's the point of defer - fire and forget.
There are several other issues I haven't shown like what happens if you need to free something only when the return code is "FALSE" indicating that something failed.
This is not as nice as defer but up till now it was a comparatively nice way to deal with those functions which were really large and complicated and had many exit points.
    result=x;
    goto cleanup;

if you meant

    result=x;
    goto cleanup;

At least then you'll be able to follow the control flow without remembering what the magic macro does.

Once they do learn about defer they will come to appreciate it much more.
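For comparison, a rough sketch of the earlier myfunc example using the proposed defer instead of the RETURN macro and CLEANUP label. The struct definition and commserror declaration are placeholders, and this assumes a compiler implementing the defer proposal:

    #include <stdlib.h>

    struct mystruct { int x; };          /* placeholder */
    int commserror(void);                /* assumed to exist, as above */

    int myfunc(void) {
        struct mystruct *myStruct = malloc(sizeof *myStruct);
        if (!myStruct) {
            return 0;
        }
        defer { free(myStruct); }        /* registered once, runs on every exit */

        if (commserror()) {
            return 0;                    /* no CLEANUP label needed */
        }
        /* ... */
        return 1;                        /* on success, cleanup still runs */
    }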
The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.
https://floooh.github.io/2018/06/17/handles-vs-pointers.html
This only covers one aspect though (pools indexed by 'generation-counted-index-handles' to solve temporal memory safety - e.g. a runtime solution for use-after-free).
learning Python first is the same difficulty as learning C first (because the main problem is the whole concept of programming)
and learning C after Python is harder than learning Python after C (because of pointers)
I also dislike RAII because it often makes it difficult to reason about when destructors are run and also admits accidental leaks just like defer does. Instead what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.
About RAII, I think your viewpoint is quite baffling. Destructors are run at one extremely well-defined point in the code: `}`. That's not hard to reason about at all. Especially not compared to often spaghetti-like cleanup tails. If you're lucky, the team does not have a policy against `goto`.
Indeed, `defer` as a language feature is an anti-pattern.
It does not allow abstracting initialization/de-initialization routines and encapsulating their execution within the resources themselves; instead it transfers the responsibility to manually perform the release or de-initialization to the users of the resources, for each use of the resource.
> I also dislike RAII because it often makes it difficult to reason about when destructors are run [..]
RAII is a way to abstract initialization, it says nothing about where a resource is initialized.
When combined with stack allocation, now you have something that gives you precise points of construction/destruction.
The same can be said about heap allocation in some sense, though this tends to be more manual and could also involve some dynamic component (ie, a tracing collector).
> [..] and also admits accidental leaks just like defer does.
RAII is not memory management, it's an initialization discipline.
> [..] what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.
Why would you want to replicate the same cleanup procedure for a certain resource throughout the code-base, instead of abstracting it in the resource itself?
Abstraction and explicitness can co-exist. One does not rule out the other.
I would not introduce Zig's errdefer though. That one would need additional semantic changes in C to express errors.
It starts out small. Then before you know it, the language is total shit. Python is a good example.
I am observing a very distinct phenomenon where the internet makes very shallow ideas mainstream and ruins many good things that have stood the test of time.
I am not saying this is one of those instances, but what the parent comment says makes sense to me. You can see another commenter who now wants to go further and wants destructors in C. Because of the internet, such voices can now reach out to each other, gather, and cause a change. But before, such voices would have had to go through a lot of sensible heads before they would be able to reach each other. In other words, bad ideas got snuffed out early before the internet, but now they go mainstream easily.
So you see, it starts out slow, but then more and more stuff gets added which diverges more and more from the point.
Actually I am not sure I do. It seems to me that even though `defer` is more explicit than destructors, it still falls under "spooky action at a distance" category.
The former would bring so much into C that it wouldn't be C anymore.
And if your point is "you should switch to C++ to get destructors", then it seems off topic. By definition, if we're talking about language X and your answer is "switch to Y", this is an entirely different subject, of very little interest to people programming in X.
But the point is `defer` is still in "spooky action at a distance" category that I generally don't want in programming languages, especially in c.
1. goto, which is "spooky action at a distance" to the nth degree. It's not even safe, you can goto anywhere, even out of scope.
2. cleanup attributes, which are not standard.
Agree, this is also why I'm a bit wary of it.
What brings me on the "pro" side is that, defer or not defer, there will need to be some kind of cleanup anyway. It's just a matter of where it is declared, and close to the acquisition is arguably better.
The caveat IMHO is that if a codebase is not consistent in its use, it could be worse.
With respect, that sounds a bit nuts. It's been 37 years since C89; unless you're targeting computers that still have floppy drives, why give up on so many convenience features? Binary prefixes (0b), #embed, defined-width integer types, more flexibility with placing labels, static_assert for compile-time sanity checks, inline functions, declarations wherever you want, complex number support, designated initializers, countless other things that make code easier to write and to read.
Defer falls in roughly the same category. It doesn't add a whole lot of complexity, it's just a small convenience feature that doesn't add any runtime overhead.
The one huge advantage of C is its ubiquity - you can use it on the latest shiny computer / OS / compiler as well as some obscure embedded platform with a compiler that hasn't been updated since 2002. (That's a rare enough situation to be unimportant, right? /laughs in industrial control gear.)
I'm wary of anything which fragments the language and makes it inaccessible to subsections of its traditional strongholds.
While I'm not a huge fan of the "just use Rust" attitude that crops up so often these days, you could certainly make an argument that if you want modern language features you should be using a more modern language.
(And, for the record, I do still write software - albeit recreationally - for computers that have floppy drives.)
First, for all platforms supported by mainstream compilers, which includes most of embedded systems (from 8 bit to 64 bit), this is not really a concern. You're cross-compiling on your desktop anyway. You'd need to deliberately install and use gcc from the early 1990s, but no one is forcing you. Are you routinely developing software for systems so niche that they aren't even supported by gcc?
But second, the code you write for the desktop is almost never the code you're gonna run in these environments, so why limit yourself across the board? In most embedded environments, especially legacy ones, you won't even have standard libc.
You're missing out on one of the best-integrated and most useful features that have been added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't get close when it comes to data initialization.
E.g. neither Rust, Zig nor C++20 can do this:
https://github.com/floooh/sokol-samples/blob/51f5a694f614253...
Odin gets really close but can't chain initializers (which is ok though):
https://github.com/floooh/sokol-odin/blob/d0c98fff9631946c11...
As far as I can tell, Rust can do what is in your example (with different syntax of course) except for this particular way of initializing an array.
To me, that seems like a pretty minor syntax issue that could be added to Rust if there were a lot of demand for initializing arrays this way.
E.g. notice how here in Rust each nested struct needs a type annotation, even though the compiler could trivially infer the type. Rust also cannot initialize arrays with random access directly, it needs to go through an expression block. Finally Rust requires `..Default::default()`:
https://github.com/floooh/sokol-rust/blob/f824cd740d2ac96691...
Zig has most of the same issues as Rust, but at least the compiler is able to infer the nested struct types via `.{ }`:
https://github.com/floooh/sokol-zig/blob/17beeab59a64b12c307...
I don't have C++ code around, but compared to C99 it has the following restrictions:
- designators must appear in order (a no-go for any non-trivial struct)
- cannot chain designators (e.g. `.a.b.c = 123`)
- doesn't have the random array access syntax via `[index]`
> ...like a pretty minor syntax issue...
...sure, each language only has a handful of minor syntax issues, but these papercuts add up to a lot of syntax noise to sift through when compared to the equivalent C99 code.
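For reference, a minimal C99 sketch (struct names made up) showing the three capabilities mentioned above: out-of-order designators, chained designators, and array index designators:

    #include <stdio.h>

    struct inner { int b; int c; };
    struct outer { int a; struct inner in; int arr[8]; };

    int main(void) {
        struct outer o = {
            .arr[4] = 42,     /* random array access via [index] */
            .in.b   = 123,    /* chained designator .a.b */
            .a      = 1,      /* out of declaration order */
        };
        printf("%d %d %d\n", o.a, o.in.b, o.arr[4]);
        return 0;
    }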
I have to admit, the ..Default::default() syntax is pretty ugly.
In theory Rust could do "let x: Foo = _ { field }" and "Foo { field: _ { bar: 1 } }". That doesn't even change the syntax. It's just whether enough people care.
I think defer{} can simplify these flows sometimes, so it can indeed be useful for good old style C.
If you want to write C++, write C++. If you want to write C, but want resource cleanup to be a bit nicer and more standard than __attribute__((cleanup)), use C with defer. The two are not comparable.
The goto approach also covers some more complicated cases.
Is that supposed to exacerbate how poor that choice is? External assembly is great.
You can just look at the code in front of you to see what defer is doing. With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen.
Sure, if the situation arises frequently, it's nice to be able to design a type that "just works" in C++. But if you need to clean up reliably in just this one place, C++ destructors are a very clunky solution.
> With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen
Isn't it a code quality issue? It should be clear from class name/description what can happen in its destructor. And if it's not clear, it's not that relevant.
The classical case of 'one destructor per class' would require designing the entire code base around classes, which comes with plenty of downsides.
> Anyone who writes C should consider using C++ instead
Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)
It can. An object with destructor doing clean-up should be created only after such clean-up is needed. In case of a file, for example, a file object should be created at file opening, so that it can close the file in its destructor.
The decision would be easier if the C subset in C++ would be compatible with modern C standards instead of being a non-standard dialect of C stuck in ca. 1995.
Defer takes 10 lines to implement in C++. [1]
You don't have to wait 50 years for a committee to introduce basic convenience features, and you don't have to use non-portable extensions until they do (and in this case the __attribute__((cleanup)) has no equivalent in MSVC), if you use a remotely extensible language.
[1] https://www.gingerbill.org/article/2015/08/19/defer-in-cpp/
My comment is targeted towards the programmer who is excited about features like this - you can add an extra two characters to your filename and trivially implement those improvements (and more) yourself, without any alterations to your codebase or day to day programming style.
C++ doesn't let you implicitly cast from void* to other pointer types. This breaks the way you typically heap-allocate variables in C: instead of 'mytype *foo = malloc(sizeof(*foo))', you have to write 'mytype *foo = (mytype *)malloc(sizeof(*foo))'. This adds a non-trivial amount of friction to something you do every handful of lines.
String literals in C++ are 'const char *' instead of 'char *'. While this is more technically correct, it means you have to add casts to 'char *' all over the place. Plenty of C APIs are really not very ergonomic when string literals aren't 'char *'.
And the big one: C++ doesn't have C's struct literals. Lots of C APIs are super ergonomic if you call them like this:
    some_function(&(struct params) {
        .some_param = 10,
        .other_param = 20,
    });
You can't do that in C++. C++ has other features (such as its own, different kind of aggregate initialization with designated initializers, and the ability for temporaries to be treated as const references) which make C++ APIs nice to work with from C++, but C APIs based around the assumption that you'll be using struct literals aren't nice to use from C++.

If you wanna use C, use C. C is a much better C than C++ is.
I like both these syntax-es (syntacies? synticies?) and I hope they make their way to C++, but if we're honestly evaluating features -- take their utility, multiply it by 100 and in my book it's still losing out against either defer or slices/span types.
If you disagree with this, then your calculus for feature utility is completely different than mine. I suspect for most people that's not the case, and most of the time the reason to pick C over C++ is ideological, not because of these 2 pieces of syntax sugar.
"I like it" is a good enough reason to use something, my original comment is to the person who wants to use C but gets excited about features like this - I don't know how many of those people are aware of how trivially most of these things can be accomplished in C++.
Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.
By the way, GCC and Clang have __attribute__((cleanup)) (which is the same, scope-based clean-up) and have done for over a decade, and this is widely used in open source projects now.
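For reference, a minimal sketch of that extension; the helper name free_charp is made up:

    #include <stdio.h>
    #include <stdlib.h>

    /* Called with a pointer to the annotated variable when it leaves scope. */
    static void free_charp(char **p) {
        free(*p);
    }

    int main(void) {
        __attribute__((cleanup(free_charp))) char *buf = malloc(64);
        if (buf == NULL)
            return 1;
        snprintf(buf, 64, "hello");
        puts(buf);
        return 0;   /* free_charp(&buf) runs automatically here */
    }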
Sometimes we need block scoped cleanup, other times we need the function one.
You can turn a function-scoped defer into a block-scoped defer by wrapping the block in a function literal.
AFAICT, you cannot turn a block scoped defer into the function one.
So I think the choice was obvious - go with the more general(izable) variant. Picking the alternative, which can do only half of the job, would be IMO a mistake.
You kinda-sorta can by creating an array/vector/slice/etc. of thunks (?) in the outer scope and then `defer`ing iterating through/invoking those.
I hate even more that you can call defer in a loop, and it will appear to work, as long as the loop has relatively few iterations, and is just silently massively wasteful.
In a tight loop you'd want your cleanup to happen after the fact. And in, say, an IO loop, you're going to want concurrency anyway, which necessarily introduces new function scope.
Why? Doing 10 000 iterations where each iteration allocates and operates a resource, then later going through and freeing those 10 000 resources, is not better than doing 10 000 iterations where each iteration allocates a resource, operates on it, and frees it. You just waste more resources.
> And in, say, an IO loop, you're going to want concurrency anyway
This is not necessarily true; not everything is so performance sensitive that you want to add the significant complexity of doing it async. Often, a simple loop where each iteration opens a file, reads stuff from it, and closes it, is more than good enough.
Say you have a folder with a bunch of data files you need to work on. Maybe the work you do per file is significant and easily parallelizable; you would probably want to iterate through the files one by one and process each file with all your cores. There are even situations where the output of working on one file becomes part of the input for work on the next file.
Anyway, I will concede that all of this is sort of an edge case which doesn't come up that often. But why should the obvious way be the wrong way? Block-scoped defer is the most obvious solution since variable lifetimes are naturally block-scoped; what's the argument for why it ought to be different?
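A rough C sketch of the block-scoped behaviour being argued for, assuming scope-based semantics for the proposed defer (the paths, buffer size, and processing are placeholders):

    #include <stdio.h>
    #include <stdlib.h>

    void process_all(const char *const *paths, size_t n) {
        for (size_t i = 0; i < n; i++) {
            char *buf = malloc(1 << 16);
            if (buf == NULL)
                continue;
            defer { free(buf); }     /* runs at the end of this iteration's block */

            FILE *fp = fopen(paths[i], "r");
            if (fp == NULL)
                continue;            /* buf is still freed on the way out */
            defer { fclose(fp); }

            /* ... read and process the file using buf ... */
        }   /* with function-scoped (Go-style) defer, all n buffers and files
               would stay allocated/open until the function returns */
    }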
And to draw a surface to the screen, you need to create an SDL texture from it. If that's all you want to do, you can then destroy the SDL surface.
So you could imagine code like this:
Oops, you're now using way more memory than you need! And what would be the benefit? You save up to one malloc and free per string you want to render, but text rendering is so demanding it completely drowns out the cost of one allocation.
If you were dynamically changing the malloc allocation size on each iteration then you have a case for a growable buffer to do the same, but in that case you would already have all the complexity of which you speak as required to support a dynamically-sized malloc.
Granted, you could do a pre-pass to find the largest string and allocate enough memory for that once, then use that buffer throughout the loop.
But again, what do you gain from that complexity?
Impossible without knowing how much to allocate, which you indicate would require adding a bunch of complexity. However, I am willing to chalk that up to being a typo. Given that we are now calculating how much to allocate on each iteration, where is the meaningful complexity? I see almost no difference between:
and

> Impossible without knowing how much to allocate
But we do know how much to allocate? The implementation of this example's RenderTextToSurface function would use SDL functions to measure the text, then allocate an SDL_Surface large enough, then draw to that surface.
> I see almost no difference between: (code example) and (code example)
What? Those two code examples aren't even in the same language as the code I showed.
The difference would be between the example I gave earlier:
and:

Remember, I'm talking about the API to a Go wrapper around SDL. How the C code would've looked if you wrote it in C is pretty much irrelevant.

I have to ask again though, since you ignored me the first time: what do you gain? Text rendering is really really slow compared to memory allocation.
We were talking about using malloc/free vs. a resizable buffer. Happy to progress the discussion towards a Go API, however. That, obviously, is going to look something more like this:
I have no idea why you think it would look like that monstrosity you came up with.

No. This is a conversation about Go. My example[1], that you responded to, was an example taken from a real-world project I've worked on which uses Go wrappers around SDL functions to render text. Nowhere did I mention malloc or free, you brought those up.
The code you gave this time is literally my first example (again, [1]), which allocates a new surface every time, except that you forgot to destroy the surface. Good job.
Can this conversation be over now?
[1] https://news.ycombinator.com/item?id=47088409
To anyone used to SDL, your proposed API is extremely surprising.
It would've made your point clearer if you'd explained this coupling between SDL_Renderer and text rendering in your original post.
But yes, I concede that if there was any reason to do so, putting a scratch surface into your SDL_Renderer that you can auto-resize and render text to would be a solution that makes for slightly nicer API design. Your SDL_Renderer now needs to be passed around as a parameter to stuff which only ought to need to concern itself with CPU rendering, and you now need to deal with mutexes if you have multiple goroutines rendering text, but those would've been alright trade-offs -- again, if there was a reason to do so. But there's not; the allocation is fast and the text rendering is slow.
Aside from my failure in dealing with the hardest problem in computer science, how would you improve the intent of the API? It is clearly improved over the original version, but we would do well to iterate towards something even better.
But that 0.1 millisecond to render a string is an eternity compared to the time it takes to allocate some memory, which might be on the order of single digit microseconds. Saving a microsecond from a process which takes 0.1 milliseconds isn't noticeable.
It might be preferable to create a font atlas and just allocate printable ASCII characters as a spritesheet (a single SDL_Texture* reference and an array of rects.) Rather than allocating a texture for each string, you just iterate the string and blit the characters, no new allocations necessary.
If you need something more complex, with kerning and the like, the current version of SDL_TTF can create font atlases for various backends.
P.S. If you have enough files, you don't want to try to open them all at once: Go will start creating more and more threads to handle the "blocked" syscalls (open(2) in this case), and you can run into the 10,000-thread limit too.
If you have a legitimate reason for doing something unusual, it is fine to have to use the tools unusually. It serves as a useful reminder that you are purposefully doing something unusual rather than simply making a bad design choice. A good language makes bad design decisions painful.
What you propose is not a bad solution, but don't come here and pretend it's the only reasonable solution for almost all situations. It's not. Sometimes, you want each work item to be a list of files, if processing one file is fast enough for synchronisation overhead to be significant. Often, you don't have to care so much about the wall clock time your loop takes and it's fast enough to just do sequentially. Sometimes, you're implementing a non-important background task where you intentionally want to only bother one core. None of these are super unusual situations.
It is telling that you keep insisting that any solution that's not a one-file-per-work-item work queue is super strange and should be punished by the language's design, when you haven't even responded to my core argument that: sometimes sequential is fast enough.
Keep insisting? What do you mean by that?
> when you haven't even responded to my core argument that: sometimes sequential is fast enough.
That stands to reason. I wasn't responding to you. The above comment was in reply to nasretdinov.
I did read your code, but it is not clear where the worker queue is. It looks like it ranges over (presumably) a channel of filenames, which is not meaningfully different than ranging over a slice of filenames. That is the original, non-concurrent solution, more or less.
Now we know why it was painful. What is interesting here is that the pain wasn't noticed as a signal that the design was off. I wonder why?
We should dive into that topic. I suspect at the heart of it lies why there is so much general dislike for Go as a language, with it being far less forgiving to poor choices than a lot of other popular languages.
But one does still have to be mindful if they want to write software productively. Using a "super duper generic and extensible" solution means that things like error propagation are already solved for you. Your code, on the other hand, is going to quickly become a mess once you start adding all that extra machinery. It didn't go unnoticed that you conveniently left that out.
Maybe that no longer matters with LLMs, when you don't even have to look at the code and producing it is effectively free, but LLMs these days also understand how defer works, so this whole thing becomes moot.
In Golang, if you iterate over a thousand files and defer closing each one inside the loop, your OS will run out of file descriptors.

Seriously, why is the default ulimit on file descriptors on Linux a measly 1024?
I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.
> RAII has also proven to be quite harmful in cases
The downsides of defer are much worse than the "downsides" of RAII. Defer is manual and error-prone, something that you have to remember to do every single time.
`defer` is obviously not implemented in this way, it will re-order the code to flow top-to-bottom and have fewer branches, but the control flow is effectively the same thing.
In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.
It allows library authors to take responsibility for cleaning up resources in exactly one place rather than forcing library users to insert a defer call in every single place the library is used.
The extra braces appear to be optional according to the examples in https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3734.pdf (see pages 13-14)
But yeah, RAII can only provide deterministic destruction because resource acquisition is initialization. As long as resource acquisition is decoupled from initialization, you need to manually track whether a variable has been initialized or not, and make sure to only call a destruction function (be that by putting free() before a return or through 'defer my_type_destroy(my_var)') in the paths where you know that your variable is initialized.
So "A limited form of RAII" is probably the wrong way to think about it.
Which removes half the value of RAII as I see it: needing to know when and how to unacquire the resource is half the battle, a burden that using RAII removes.
Of course, calling code as the scope exits is still useful. It just seems silly to call it any form of RAII.
It's not uncommon that I encounter a bug when running some code on new hardware or a new architecture or a new compiler for the first time because the code assumed that an integer member of a class would be 0 right after initialization and that happened to be true before. ASan helps here, but it's not trivial to run in all embedded contexts (and it's completely out of the question on MCUs).
It's been some time since I have used C++, but as far as I understand it RAII is primarily about controlling leaks, rather than strictly defined state (even if the name would imply that) once the constructor runs. The core idea is that if resource allocations are condensed in constructors then destructors gracefully handle deallocations, and as long as you don't forget about the object (_ptr helpers help here) the destructors get called and you don't leak resources. You may end up with a bunch of FooManager wrapper classes if acquisition can fail (throw), though. So yes, I agree with your GP comment, it's the deterministic destruction that is the power of RAII.
On the other hand, what you refer to in this* comment and what the parent hints at with "When implemented properly" is what I have heard referred to (in a non-English context) as type totality. Think AbstractFoo vs ConcreteFoo, but used not only for abstracting state and behavior in a class hierarchy, but rather to ensure that objects are total. Imagine, dunno, a database connection. You create some AbstractDBConnection (bad name), which holds some config data, then the open() method returns an OpenDBConnection object. In this case the Abstract one does not even need a close(), and the total object can safely call close() in its destructor. Maybe not the best example. This avoids resources that are in an undefined state.
Anyway, I think this could be fixed, if we wanted to. C just describes the objects as being uninitialized and has a bunch of UB around uninitialized objects. Nothing in C says that an implementation can't make every uninitialized object 0. As such, it would not harm C interoperability if C++ just declared that all variable declarations initialize variables to their zero value unless the declaration initializes it to something else.
I got out my 4e Stroustrup book and checked the index, RAII only comes up when discussing resource management.
Interestingly, the verbatim introduction to RAII given is:
> ... RAII allows us to eliminate "naked new operations," that is, to avoid allocations in general code and keep them buried inside the implementation of well-behaved abstractions. Similarly "naked delete" operations should be avoided. Avoiding naked new and naked delete makes code far less error-prone and far easier to keep free of resource leaks
From the embedded standpoint, and after working with Zig a bit, I'm not convinced about that last line. Hiding heap allocations seems like it makes it harder to avoid resource leaks!