Why is the first C++ (m)allocation always 72 KB?
119 points by joelsiks 15 hours ago | 26 comments

pjmlp 10 hours ago
This is compiler specific and cannot be generalised as C++.
reply
zabzonk 9 hours ago
Well, yes, but still quite interesting, IMHO. It's not like GCC is one of the least used compilers.
reply
pjmlp 8 hours ago
Yeah, but that isn't C++ in isolation, thus the title is incorrect.
reply
appreciatorBus 7 hours ago
Article has been updated

> EDIT (March 1, 2026): I updated the title to clarify that this observation is specific to my environment. The original title may have implied a universal behavior, which isn’t the case. Thanks for the feedback!

reply
pjmlp 3 hours ago
Thanks for the update!
reply
surajrmal 3 hours ago
I would also expect it to depend on whether or not you have exceptions enabled. Half the ecosystem has them disabled.
reply
compiler-guy 9 hours ago
And C++ library specific as well. Perhaps even more so.
reply
jebarker 6 hours ago
Reading this was a good reminder not to be intimidated by assumptions about complexity. Without giving it much thought, I would have assumed it would be hard to replace malloc for something as fundamental as ls, but it's surprisingly simple.
reply
syncsynchalt 5 hours ago
There's usually an easy-ish way to override malloc/calloc/realloc/free on Unix, as it's very useful to do when debugging issues or just to collect allocation metrics.

In ELF objects (i.e. on Linux) this is usually done with "weak" symbol binding. This is an optional flag for symbols in the ELF format that lets you override a symbol by providing a competing non-weak symbol, which the linker will prefer when there is a conflict. https://en.wikipedia.org/wiki/Weak_symbol

You can see the list of weak symbols by looking for a 'W' in the output of `nm` on Linux hosts.

reply
userbinator 2 hours ago
If you started learning from the "bottom-up", you wouldn't think it's intimidating. Fortunately, it's never too late to start learning.
reply
ozgrakkurt 2 hours ago
This applies to a lot of things unfortunately. There is a cult of just being afraid and scaring other people.

"You can't do it, just use a library." "Just use this library, everyone uses it." "Even Google uses this library; do you think you are better?" And so on.

To add another example to this, you will read that memcpy in libc is super optimized and that you shouldn't write it yourself, etc.

But check ClickHouse [1] as an example: they implemented it themselves, it's pretty basic, and the comments in the code say it works well.

Also, you can check the musl libc code, which is fairly simple as well.

People still would argue that you used some intrinsic so it isn't portable, or that you only benchmarked one case so it won't work well overall.

Well, you CAN benchmark as widely as you want inside your project and have a different memcpy per project. This kind of thing isn't as bad as people make it out to be, in my opinion.

Ofc memcpy is just an example here; the same applies to memory allocation, I/O, etc.

As a negative note, IMO this is one of the major reasons why most software is super crappy now. Everything uses some library -> those libraries change all the time -> more breakage -> more maintenance. A similar chain happens with performance, because the person who wrote that library probably doesn't even know how I am using it.

This is also why people have endless arguments about which library/tool to use, when they could instead be learning more and more things every day.

[1] https://github.com/ClickHouse/ClickHouse/blob/master/base/gl...

reply
Joker_vD 11 hours ago
Huh. Why is this emergency pool not statically allocated? Is it possible to tune the size of this pool on libc++ startup somehow? Because otherwise it absolutely should've been statically allocated.
reply
joelsiks 11 hours ago
I did mention it briefly in the post, but you can opt-in for a fixed-size statically allocated buffer by configuring libstdc++ with --enable-libstdcxx-static-eh-pool. Also, you can opt-out of the pool entirely by configuring the number of objects in the pool to zero with the environment variable GLIBCXX_TUNABLES=glibcxx.eh_pool.obj_count=0.
reply
ninkendo 9 hours ago
I wonder why it’s opt-in. Maybe it’s part of the whole “you only pay for what you use” ethos, i.e. you shouldn’t have to pay the cost for a static emergency pool if you don’t even use dynamic memory allocation.
reply
throwaway2037 12 hours ago
I would like to see the source code for libmymalloc.so, however, I don't see anything in the blog post. Nor do I see anything in his GitHub profile: https://github.com/jsikstro

Also, I cannot find his email address anywhere (to ask him to share it on GitHub).

Am I missing something?

reply
joelsiks 12 hours ago
The exact implementation of mymalloc isn't relevant to the post. I have an old allocator published at https://github.com/joelsiks/jsmalloc that I did as part of my Master's thesis, which uses a similar debug-logging mechanism that is described in the post.
reply
nly 12 hours ago
dlsym() with the RTLD_NEXT flag basically:

https://catonmat.net/simple-ld-preload-tutorial-part-two

There's actually a better way to hook GNU's malloc:

https://www.man7.org/linux/man-pages/man3/malloc_hook.3.html

This is better because you can disable the hook inside the callback, and therefore use malloc within your malloc hook (no recursion).

But you can't use this mechanism before main().

reply
Joker_vD 11 hours ago

    The use of these hook functions is not safe in multithreaded
    programs, and they are now deprecated.  From glibc 2.24 onwards,
    the __malloc_initialize_hook variable has been removed from the
    API, and from glibc 2.34 onwards, all the hook variables have been
    removed from the API.  Programmers should instead preempt calls to
    the relevant functions by defining and exporting malloc(), free(),
    realloc(), and calloc().
reply
nly 10 hours ago
Yeah. Shame though, because it gave you the option to control exactly when you hooked and didn't hook, which let you stop and start debugging allocations based on arbitrary triggers.

The global variable approach was very useful and pretty low overhead.

reply
fweimer 7 hours ago
You can still override malloc and call __libc_malloc if you do not want to bother with dlsym/RTLD_NEXT. These function aliases are undocumented, but for a quick experiment, that shouldn't matter.
reply
jeffbee 8 hours ago
If you only wanted to observe the behavior the post is discussing, it seems like `ltrace -e malloc` is a lot easier.
reply
anonymousiam 3 hours ago
So basically, before any of the code even runs, this environment begins by gobbling up more than the total RAM that most of my first computers had (SYM-1, IMSAI 8080, Ferguson Big Board, Kaypro II, and CCS S-100 Z-80). All of these systems were 8-bit, with RAM sizes ranging from 8KB to 64KB. That was the maximum RAM available, and it was shared by the OS and the applications.
reply
surajrmal 3 hours ago
What's the purpose of making such a comparison? The implication is that we're being wasteful, but I'm not certain that's the point you're trying to make.
reply
cendyne 3 hours ago
This was a fun little share. Thanks for writing it up!
reply
aliveintucson 9 hours ago
I think you should read up on what "always" means.
reply
znpy 8 hours ago
> TLDR; The C++ standard library sets up exception handling infrastructure early on, allocating memory for an “emergency pool” to be able to allocate memory for exceptions in case malloc ever runs out of memory.

Reminds me of Perl's $^M: https://perldoc.perl.org/variables/$%5EM

In Perl you can "hand-manage" that. This line would allocate a 64K buffer for use in an emergency:

    $^M = 'a' x (1 << 16);
reply