There needs to be more competition in the malloc space. Between various huge page sizes and transparent huge pages, there are a lot of gains to be had over what you get from a default GNU libc.
Our results from July 2025:
rows are <allocator>: <RSS>, <time spent for allocator operations>
app1:
glibc: 215,580 KB, 133 ms
mimalloc 2.1.7: 144,092 KB, 91 ms
mimalloc 2.2.4: 173,240 KB, 280 ms
tcmalloc: 138,496 KB, 96 ms
jemalloc: 147,408 KB, 92 ms
app2, bench1:
glibc: 1,165,000 KB, 1.4 s
mimalloc 2.1.7: 1,072,000 KB, 5.1 s
mimalloc 2.2.4:
tcmalloc: 1,023,000 KB, 530 ms
app2, bench2:
glibc: 1,190,224 KB, 1.5 s
mimalloc 2.1.7: 1,128,328 KB, 5.3 s
mimalloc 2.2.4: 1,657,600 KB, 3.7 s
tcmalloc: 1,045,968 KB, 640 ms
jemalloc: 1,210,000 KB, 1.1 s
app3:
glibc: 284,616 KB, 440 ms
mimalloc 2.1.7: 246,216 KB, 250 ms
mimalloc 2.2.4: 325,184 KB, 290 ms
tcmalloc: 178,688 KB, 200 ms
jemalloc: 264,688 KB, 230 ms
tcmalloc was from github.com/google/tcmalloc/tree/24b3f29. I don't recall which jemalloc was tested.
tcmalloc (thread caching malloc) assumes memory allocations have good thread locality. This is often a double win (less false sharing of cache lines, and most allocations hit thread-local data structures in the allocator).
Multithreaded async systems destroy that locality, so it constantly has to run through the exception case: thread A allocates a buffer, the task goes async, the request wakes up on thread B, which frees the buffer and then has to synchronize with thread A's cache to give it back.
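The shape of it, as a toy C sketch (nothing tcmalloc-specific, just the cross-thread alloc/free pattern):

```c
/* Toy sketch (not tcmalloc internals): a buffer allocated on one thread
   and freed on another, the pattern that pushes a thread-caching
   allocator off its fast path. Build with -pthread. */
#include <pthread.h>
#include <stdlib.h>

static void *consumer(void *buf) {
    /* freed on thread B: lands in B's local cache and eventually has to be
       routed back through the allocator's shared/central structures */
    free(buf);
    return NULL;
}

int main(void) {
    void *buf = malloc(4096);            /* allocated on thread A (main) */
    pthread_t t;
    pthread_create(&t, NULL, consumer, buf);
    pthread_join(t, NULL);
    return 0;
}
```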
Are you using async rust, or sync rust?
[0]: https://github.com/google/tcmalloc/blob/master/docs/design.m...
On the tokio scheduler side, the executor is thread-per-core, and work stealing of in-progress tasks shouldn't be happening too much.
For all thread pool threads, or threads unaffiliated with the executor, see earlier speculation on OS scheduler behavior.
Indeed, it's not the old gperftools version.
Blog: https://abseil.io/blog/20200212-tcmalloc
History / Diffs: https://google.github.io/tcmalloc/gperftools.html
1. tcmalloc is actually the only allocator I tested which was not using thread-local caches (the new tcmalloc defaults to per-CPU caches instead). Even glibc malloc has tcache.
2. Async executors typically shouldn't have tasks jumping willy-nilly between threads. I see the issue you describe more often with the use of thread pools (like rayon or tokio's spawn_blocking). I'd argue that the use of thread pools isn't necessarily an inherent feature of async executors. Certainly tokio relies on its thread pool for fs operations, but io_uring (for example) makes that mostly unnecessary.
Edit: I see mimalloc v3 is out – I missed that! That probably moots this discussion altogether.
Even toolchains like Turbo Pascal for MS-DOS had an API to customise the memory allocator.
One size fits all was never a solution.
(99% of the time, I find this less problematic than Java’s approach, fwiw).
I heard that was a common complaint for Minecraft.
A decade seems a usual timescale for that, considering e.g. Python 2->3.
To an outsider, that looks like the JVM heap just steadily growing, which is easy to mistake for a memory leak.
This feels like a huge understatement. I still have some PTSD around when I did Java professionally between like 2005 and 2014.
The early part of that was particularly horrible.
Barring bugs/native leaks, Java has very predictable memory allocation.
we are talking about DEallocation
"To an outsider, that looks like the JVM heap just steadily growing, which is easy to mistake for a memory leak."
I cut the part about it being possible to make the JVM return heap memory after compaction, but usually that's not done, i.e. if something grew once, it's likely to grow again.
The issue is that through the standard course of a JVM application running, every allocated page will ultimately be touched. The JVM fills up new gen, runs a minor collection, moves old objects to old gen, and continues until old gen gets filled. When old gen is filled, a major collection is triggered and all the live objects get moved around in memory.
This natural action of the JVM means you'll see a sawtooth of used memory in a properly running JVM where the peak of the sawtooth occasionally hits the memory maximum, which in turn causes the used memory to plummet.
There are a lot of bad tuning guides for Minecraft that should be completely ignored and thrown in the trash. The only GC setting you need for it is `-XX:+UseZGC`
For example, a number of the Minecraft golden guides I've seen will suggest things like setting pause targets but also survivor space sizes. The thing is, the pause target is disabled once you start playing with survivor space sizes.
It was a better idea when Java had the old mark-and-sweep collector. However, with the generational collectors (which all Java collectors are now, except for Epsilon) it's more problematic. Reusing buffers and objects in those buffers pretty much guarantees that the buffer ends up in oldgen. That means that to clear it out, the VM has to do more expensive collections.
The actual allocation time for most of Java's collectors is almost zero: it's a capacity check and a pointer bump in most circumstances. Giving the JVM more memory will generally solve issues with memory pressure and GC times. That's (generally) a better solution to performance problems than managing a large buffer.
Now, that said, there certainly have been times where allocation pressure is a major problem and removing the allocation is the solution. In particular, I've found boxing to often be a major cause of performance problems.
For example, some code I had to clean up pretty early on in my career was a dev, for unknown reasons, reinventing the `ArrayList` and then using that invention as a set (doing deduplication by iterating over the elements and checking for duplicates). It was done in the name of performance, but it was never a slow part of the code. I replaced the whole thing with a `HashSet` and saved ~300 loc as a result.
This individual did that sort of stuff all over the code base.
Heap allocation in Java is trivial and happens constantly. People typically do funky stuff with memory allocation because they have to, because the GC is causing pauses.
People avoid system allocators in C++ too, they just don't have to do it because of uncontrollable pauses.
This same dev did things like putting what he deemed as being large objects (icons) into weak references to save memory. When the references were collected, invariably they had to be reloaded.
That was not the source of memory pressure issues in the app.
I've developed a mistrust for a lot of devs "doing it because we have to" when it comes to performance tweaks. It's not that a buffer is never the right thing to do, but it's not been something I had to reach for to solve GC pressure issues. Often, far simpler solutions like pulling an allocation out of the middle of a loop, or switching from boxed types to primitives, were all that was needed to relieve memory pressure.
The closest I've come to it is replacing code which would do an expensive and allocation heavy calculation with a field that caches the result of that calculation on the first call.
Any program that cares about performance is going to focus on minimizing memory allocation first. The difference in a GCed language like Java is that the problems manifest as GC pauses that may or may not be predictable. In a language like C++ you can skip the pauses and worry about overall throughput.
> Many programs in GC language end up fighting the GC by allocating a large buffer and managing it by hand
That's the primary thing I'm contending with. This is a strategy for fighting the GC, but it's also generally a bad strategy. One that I think gets pulled more because someone heard of the suggestion and less because it's a good way to make things faster.
That guy I'm talking about did a lot of "performance optimizations" based on gut feelings and not data. I've observed that a lot of engineers operate that way.
But I've further observed that when it comes to optimizing for the GC, a large amount of problems don't need such an extreme measure like building your own memory buffer and managing it directly. In fact, that sort of a measure is generally counter productive in a GC environment as it makes major collections more costly. It isn't a "never do this" thing, but it's also not something that "many programs" should be doing.
I agree that many programs with a GC will probably need to change their algorithms to minimize allocations. I disagree that "allocating a large buffer and managing it by hand" is a technique that almost any program or library needs to engage in to minimize GCs.
Allocating a large buffer is literally what an array or vector is. A heap uses a heap structure and hops around in memory for every allocation and free. It gets worse the more allocations there are. The allocations are fragmented and in different parts of memory.
Allocating a large buffer takes care of all this, when it's possible. It doesn't make sense to make lots of heap allocations when what you want is multiple items next to each other in memory and one heap allocation.
> That guy I'm talking about did a lot of "performance optimizations" based on gut feelings and not data.
You need to let this go, that guy has nothing to do with what works when optimizing memory usage and allocation.
> But I've further observed that when it comes to optimizing for the GC, a large amount of problems don't need such an extreme measure like building your own memory buffer and managing it directly.
Making an array of contiguous items is not an "extreme strategy", it's the most efficient and simplest way for a program to run. Other memory allocations can just be an extension of this.
> I agree that many programs with a GC will probably need to change their algorithms to minimize allocations. I disagree that "allocating a large buffer and managing it by hand"
If you need the same amount of memory but need to minimize allocations, how do you think that is done? You make larger allocations and split them up. You keep saying "managing it by hand" as if it has to be tricky or difficult. Using indices of an array is not difficult, and neither is handing out indices or ranges in small sections.
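To make it concrete, here's a rough sketch of the idea (made-up names; a fixed pool of slots chained through an index-based free list):

```c
/* Sketch of "hand out indices into one big array" (illustrative names):
   a fixed pool of slots chained through an index-based free list, so
   acquiring and releasing a slot is a couple of loads and stores. */
#include <stdint.h>

#define POOL_SLOTS 1024

typedef struct { float x, y, z; } item_t;    /* whatever the pooled item holds */

static item_t  pool[POOL_SLOTS];
static int32_t next_free[POOL_SLOTS];
static int32_t free_head = -1;

static void pool_init(void) {
    for (int32_t i = 0; i < POOL_SLOTS; i++)
        next_free[i] = i + 1;                /* slot i points at slot i+1 */
    next_free[POOL_SLOTS - 1] = -1;          /* end of the free list */
    free_head = 0;
}

static int32_t pool_acquire(void) {          /* returns a slot index, -1 if full */
    int32_t i = free_head;
    if (i < 0) return -1;
    free_head = next_free[i];
    pool[i] = (item_t){0};                   /* hand out a clean slot */
    return i;
}

static void pool_release(int32_t i) {        /* give the slot back */
    next_free[i] = free_head;
    free_head = i;
}
```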
Not in the JVM. And maybe this is ultimately what we are butting up against. After all, the JVM isn't all GCed languages, it's just one of many.
In the JVM, heap allocations are done via bump allocation. When a region is filled, the JVM performs a garbage collection which moves objects in the heap (it compacts the memory). It's not an actual heap structure for the JVM.
> It doesn't make sense to make lots of heap allocations when what you want is multiple items next to each other in memory and one heap allocation.
That is (currently) not possible to do in the JVM, barring primitives. When I create a `new Foo[128]` in the JVM, that creates an array big enough to hold 128 references of Foo, not 128 Foo objects. Those have to be allocated onto the heap separately. This is part of the reason why managing such an object pool is pointless in the JVM. You have to make the allocations anyways and you are paying for the management cost of that pool.
The object pool is also particularly bad in the JVM because it stops the JVM from performing optimizations like scalarization. That's where the JVM can avoid a heap allocation altogether and instead pulls out the internal fields of the allocated object to hand off to a calling function. In order for that optimization to occur, an object can't escape the current scope.
I get why this isn't the same story if you are talking about another language like C# or go. There are still the negative consequences of needing to manage the buffer, especially if the intent is to track allocations of items in the buffer and to reassign them. But there is a gain in the locality that's nice.
> Using indices of an array is not difficult and neither is handing out indices or ranges to in small sections.
Easy to do? Sure. Easy to do fast? Well, no. That's entirely the reason why C++ has multiple allocators. It's the crux of the problem an allocator is trying to solve in the first place: "How can I efficiently give a chunk of memory back to the application?"
Obviously, it'll matter what your usage pattern is, but if it's at all complex, you'll run into the same problems that the general allocator hits.
If that were true then they wouldn't be heap allocations.
https://www.digitalocean.com/community/tutorials/java-jvm-me...
https://docs.oracle.com/en/java/javase/21/core/heap-and-heap...
> not possible to do in the JVM, barring primitives
Then you make data structures out of arrays of primitives.
> Easy to do? Sure. Easy to do fast? Well, no. That's entirely the reason why C++ has multiple allocators.
I don't know what this means. Vectors are trivial, and if you hand out ranges of memory in an arena allocator you allocate it once and free it once, which solves the heavy-allocation problem. The allocator parameter in templates doesn't factor into this.
"Heap" is a misnomer. It's not called that due to the classic CS "heap" datastructure. It's called that for the same reason it's called a heap allocation in C++. Modern C++ allocators don't use a heap structure either.
How the JVM does allocations, for all its collectors, is in fact a bump allocator in the heap space. There are some weedsy details (for example, threads in the JVM have their own heap region for doing allocation, to avoid contention), but suffice it to say it ultimately translates into a region check and then a pointer bump. This is why the JVM is so fast at allocation, much faster than C++ can be. [1] [2]
> I don't know what this means.
JVM allocations are typically pointer bumps, adding a number to a register. There's really nothing faster than it. If you are implementing an arena then you've already lost in terms of performance.
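For illustration, here's a C-level sketch of what that bump allocation amounts to (this is not JVM code; the names are made up):

```c
/* Not JVM code - just a C sketch of what TLAB-style bump allocation
   boils down to: a bounds check plus a pointer increment. The names
   (tlab_t, refill_slow_path) are made up for illustration. */
#include <stddef.h>

typedef struct {
    char *top;   /* next free byte in this thread's allocation buffer */
    char *end;   /* end of the buffer */
} tlab_t;

static void *refill_slow_path(tlab_t *t, size_t size) {
    /* rare path: grab a fresh buffer from the shared heap, possibly
       triggering a GC; elided in this sketch */
    (void)t; (void)size;
    return NULL;
}

static void *tlab_alloc(tlab_t *t, size_t size) {
    if ((size_t)(t->end - t->top) < size)
        return refill_slow_path(t, size);
    void *p = t->top;
    t->top += size;              /* the common case: one pointer bump */
    return p;
}
```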
[1] https://www.datadoghq.com/blog/understanding-java-gc/#memory...
If you got a web request, you could allocate a memory pool for it, then you would do all your memory allocations from that pool. And when your web request ended - either cleanly or with a hundred different kinds of errors, you could just free the entire pool.
it was nice and made an impression on me.
I think the lowly malloc probably has lots of interesting ways of growing and changing.
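Roughly the shape of that per-request pool, as a minimal sketch (illustrative names; no growth, only basic alignment):

```c
/* Minimal per-request arena sketch (illustrative names): every allocation
   for one request comes out of one block, and the whole thing is released
   with a single free() no matter how the request ends. */
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} arena_t;

static int arena_init(arena_t *a, size_t cap) {
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

static void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 15) & ~(size_t)15;               /* keep allocations aligned */
    if (a->used + n > a->cap) return NULL;    /* sketch: no growth */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_free_all(arena_t *a) {      /* request finished (or failed): */
    free(a->base);                            /* one free, nothing leaks       */
    a->base = NULL;
}
```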
Yes, if you want to use huge pages with arbitrary alloc/free, then use a third-party malloc. If your alloc/free patterns are not arbitrary, you can do even better. We treat malloc as a magic black box but it's actually not very good.
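For example, if your lifetimes are simple you can grab explicitly huge-page-backed memory yourself and carve it up. A sketch, assuming 2 MiB huge pages have been reserved on the system (e.g. via /proc/sys/vm/nr_hugepages):

```c
/* Sketch: back an arena with explicit huge pages and do your own carving.
   Assumes 2 MiB huge pages are reserved; 1 GiB pages would add MAP_HUGE_1GB. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>
#include <stdio.h>

int main(void) {
    size_t len = 2UL << 20;                   /* one 2 MiB huge page */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");          /* none reserved, or no permission */
        return 1;
    }
    /* hand out pieces of p with your own bump/arena logic */
    munmap(p, len);
    return 0;
}
```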
https://jemalloc.net/jemalloc.3.html
One thing to call out: sdallocx integrates well with C++'s sized delete semantics: https://isocpp.org/files/papers/n3778.html
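A minimal sketch of the sized-deallocation idea using jemalloc's non-standard API (build and link against jemalloc, e.g. -ljemalloc):

```c
/* Minimal sketch of sized deallocation via jemalloc's non-standard API:
   the caller passes the size at free time so the allocator can skip the
   size lookup - the same idea C++'s sized operator delete exploits. */
#include <jemalloc/jemalloc.h>
#include <stddef.h>

int main(void) {
    size_t n = 256;
    void *p = mallocx(n, 0);      /* jemalloc allocation */
    if (p == NULL) return 1;
    /* ... use the buffer ... */
    sdallocx(p, n, 0);            /* sized deallocation: caller supplies n */
    return 0;
}
```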
The nice thing about mimalloc is that there are a ton of configurable knobs available via env vars. I'm able to hand those 16 1 GiB pages to the program at launch via `MIMALLOC_RESERVE_HUGE_OS_PAGES=16`.
EDIT: after re-reading your comment a few times, I apologize if you already knew this (which it sounds like you did).
My old Intel CPU only has 4 slots for 1GB pages, and that was enough to get me about a 20% performance boost on Factorio. (I think a couple percent might have been allocator change but the boost from forcing huge pages was very significant)
Last time I checked mimalloc, which was admittedly a while ago, probably 5 years, it was noticeably worse, and I saw a lot of people on their GitHub issues agreeing with me, so I just never looked at it again.
Jemalloc can usually keep the smallest memory footprint, followed by tcmalloc.
Mimalloc can really speed things up sometimes.
As usual, YMMV.
Mimalloc made the claim that they were the fastest/best when they released and that didn't hold up to real world testing, so I am not inclined to trust it now.
That’s… ahistorical, at least so far as I remember. It wasn’t marketed as either of those; it was marketed as small/simple/consistent with an opt-in high-security mode, and then its performance bore out as a result of the first set of target features/design goals. It was mainly pushed as easy to adopt, easy to use, easy to statically link, etc.
That is true of basically every single malloc replacement out there, that is not a uniquely defining feature.
Using the last workload tested as an example, mimalloc just consumed memory like crazy. It was probably leaking, as it was the stock version that comes in Debian, so probably quite old.
Tcmalloc and jemalloc were neck and neck when comparing app metrics (request duration etc. was quite similar), but jemalloc consistently used only about half the RAM of tcmalloc.
Both custom allocators used way less RAM than the stock allocator though. Something like 10x (!) less. In the end the workload with jemalloc hovers somewhere around 4% of the memory limit. Not bad for one single package and an additional compile option to enable it.
Jemalloc Postmortem - https://news.ycombinator.com/item?id=44264958 - June 2025 (233 comments)
Jemalloc Repositories Are Archived - https://news.ycombinator.com/item?id=44161128 - June 2025 (7 comments)
Most of the savings seemed to come from HVAC costs, followed by buying fewer computers and in turn fewer data centers. I'm sure these days saving memory is also a big deal, but it doesn't seem to have been then.
The above was already the case 10 years ago, so LLMs are at most another factor added on.
In startups I've put more effort into squeezing blood from a stone for far less change, even if the change was proportionally more significant to the business. Sometimes it would be neat to say "something I did saved $X million or saved Y kWh of energy" or whatever.
At most... Think 10x rather than 0.1x or 1x.
they've been using jemalloc (and employing "je", Jason Evans) since 2009.
For example they use AsmJit in a lot of projects (both internal and open-source) and it's now unmaintained because of funding issues. Maybe they have now internal forks too.
I'm saddened that the job market in Australia is largely React CRUD applications and that it's unlikely I will find a role that lets me leverage my niche skill set (which is also my hobby)
Link in bio.
I applied for both and got ghosted, haha.
I also saw a government role as a security researcher. Involves reverse engineering, ghidra and that sort of thing. Super awesome - but the pay is extremely uncompetitive. Such a shame.
Other than that, the most interesting roles are in finance (like HFT) - where you need to juggle memory allocations, threads and use C++ (hoping I can pitch Rust but unlikely).
Sadly, they have a reputation for pretty rough cultures and uncompetitive salaries, and it's all in-office.
The one I know of (IMC trading) does a lot of low level stuff like this and is currently hiring.
This doesn't quite read properly to me. What does it actually mean, does anyone know?
Facebook's coding AIs to the rescue, maybe? I wonder how good all these "agentic" AIs are at dreaded refactoring jobs like these.
There are not many engineers capable of working on memory allocators, so adding more burden by agentic stuff is unlikely to produce anything of value.
No.
This is something you shouldn't allow coding agents anywhere near, unless you have expert-level understanding required to maintain the project like the previous authors have done without an AI for years.
I've done some work in this sort of area before, though not literally on a malloc. Yes you very much want to be careful, but ultimately it's the tests that give you confidence. Pound the heck out of it in multithreaded contexts and test for consistency.
I don't think so.
Even with LLM-generated code, tests are still not enough and you cannot trust it. It can pass the tests and still cause a regression while looking seemingly correct, as in this case study [0].
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
https://technology.blog.gov.uk/2015/12/11/using-jemalloc-to-...
When I preloaded jemalloc, memory remained at significantly lower levels, and, more importantly, it was stable.
There seems to be no single correct solution to memory allocation; it depends on the workload.
Initially the idea was diagnostics; instead, the problem disappeared on its own.
Second thoughts: Actually the fb.com post is more transparent than I'd have predicted. Not bad at all. Of course it helps that they're delivering good news!
He's doing just fine. If you're looking for a story about a FAANG company not paying engineers well for their work, this isn't it.
"To help personalize content, tailor and measure ads and provide a safer experience, we use cookies. By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies. Learn more, including about available controls:
They should have just called it an ivory tower, as that's what they're building whenever they're not busy destroying democracy with OS Backdoor lobbyism or Cambridge Analytica shenanigans.
Edit: If every thread about any of Elon Musk's companies can contain at least 10 comments talking about Elon's purported crimes against humanity, threads about Zuckerberg's companies can contain at least 1 comment. Without reminders like this, stories like last week's might as well remain non-consequential.
From the Department of Redundancy Department.
I was recently debugging an app double-free segfault on my android 13 samsung galaxy A51 phone, and the internal stack trace pointed to jemalloc function calls (je_free).
> This branch is 71 commits ahead of and 70 commits behind jemalloc/jemalloc:dev.
It looks like both have been independently updated.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
A 0.5% improvement may not be a lot to you, but at hyperscaler scale it's well worth staffing a team to work on it, with the added benefit of having people on hand that can investigate subtle bugs and pathological perf behaviors.
But as usual, there is an xkcd for that: https://xkcd.com/1205/
One project I spent a bunch of time optimizing the write path of I/O. It was just using standard fwrite. But by staging items correctly it was an easy 10x speed win. Those optimizations sometimes stack up and count big. But it also had a few edges on it, so use with care.
During my time at Facebook, I maintained a bunch of kernel patches to improve jemalloc purging mechanisms. It wasn't popular in the kernel or the security community, but it was more efficient on benchmarks for sure.
Many programs run multiple threads, allocate in one and free in the other. Jemalloc's primary mechanism used to be: madvise the page back to the kernel and then have it allocate it in another thread's pool.
One problem: this involves zeroing memory, which has an impact on cache locality and overall app performance. It's completely unnecessary if the page is being recirculated within the same security domain.
The problem was getting everyone to agree on what that security domain is, even if the mechanism was opt-in.
https://marc.info/?l=linux-kernel&m=132691299630179&w=2
We did extensive benchmarking of HHVM with and without your patches, and they were proven to make no statistically significant difference in high level metrics. So we dropped them out of the kernel, and they never went back in.
I don't doubt for a second that you can come up with specific counterexamples and microbenchmarks which show a benefit. But you were unable to show an advantage at the system level when challenged on it, and that's what matters.
By the time you joined and benchmarked these systems, the continuous rolling deployment had taken over. If you're restarting the server every few hours, of course the memory fragmentation isn't much of an issue.
> But you were unable to show an advantage at the system level when challenged on it, and that's what matters.
You mean 5 years after I stopped working on the kernel and the underlying system had changed?
I don't recall ever talking to you on the matter.
Nope, I started in 2014.
> I don't recall ever talking to you on the matter.
I recall. You refused to believe the benchmark results and made me repeat the test, then stopped replying after I did :)
For the peanut gallery: this is a manifestation of an internal eng culture at fb that I wasn't particularly fond of. Celebrating that "I killed X" and partying about it.
You didn't reply to the main point: did you benchmark a server that was running several days at a time? Reasonable people can disagree about whether this is a good deployment strategy or not. I tend to believe that there are many places which want to deploy servers and run them for days, if not months.
The "servers are only on for a few hours" thing was like never true so I have no idea where that claim is coming from. The web performance test took more than a few hours to run alone and we had way more aggressive soaks for other workloads.
My recollection was that "write zeroes" just became a cheaper operation between '12 and '14.
A fun fact to distract from the awkwardness: a lot of the kernel work done in the early days was exceedingly scrappy. The port mapping stuff for memcached UDP before SO_REUSEPORT for example. FB binaries couldn't even run on vanilla linux a lot of the time. Over the next several years we put a TON of effort in getting as close to mainline as possible and now Meta is one of the biggest drivers of Linux development.
If the allocator returns a page to the kernel and then immediately asks for it back, it's not doing its job well: the main purpose of the allocator is to cache allocations from the kernel. Those patches predate decay-based purging and the background purging thread; those later changes significantly improved how jemalloc holds on to memory that might be needed soon. Instead, the zeroing-out patches optimize for the pathological behavior.
Also, the kernel has since exposed better ways to optimize memory reclamation, like MADV_FREE, which is a "lazy reclaim": the page stays mapped into the process until the kernel actually needs it, so if we use it again before that happens, the whole unmapping/mapping is avoided, which saves not only the zeroing cost but also the TLB shootdown and other costs. And without changing any security boundary. jemalloc can take advantage of this by enabling "muzzy decay".
However, the drawback is that system-level memory accounting becomes even more fuzzy.
(hi Alex!)
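For reference, the MADV_FREE pattern described above looks roughly like this (illustrative sketch, not jemalloc's actual purging code):

```c
/* Illustrative sketch of the MADV_FREE "lazy reclaim" described above:
   pages stay mapped, the kernel may take them back under memory pressure,
   and touching them again before that happens simply cancels the hint. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>
#include <string.h>

int main(void) {
    size_t len = 1UL << 20;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    memset(p, 0xab, len);            /* pages are now resident and dirty */
    madvise(p, len, MADV_FREE);      /* kernel may reclaim them lazily */

    p[0] = 1;   /* reused before reclaim: the write cancels MADV_FREE for
                   that page, no fresh zeroed page needs to be faulted in */
    munmap(p, len);
    return 0;
}
```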
Haswell (2013) doubled store throughput to 32 bytes/cycle per core, and Sandy Bridge (2011) doubled load throughput to the same, but the dataset being operated on at FB is most likely much larger than what L1+L2+L3 can fit, so I am wondering how much effect the vectorization engine might have had, since a bulk-zeroing operation over large datasets is going to be bottlenecked by single-core memory bandwidth anyway, which at the time was ~20 GB/s.
Perhaps the operation became cheaper simply because of moving to another CPU uarch with higher clock and larger memory bandwidth rather than the vectorization.
People got promoted for continuous deployment
https://engineering.fb.com/2017/08/31/web/rapid-release-at-m...
I think it's fair to say the hardware changed, the deployment strategy changed and the patches were no longer relevant, so we stopped applying them.
When I showed up, there were 100+ patches on top of a 2009 kernel tree. I reduced that to about 10 or so critical patches and rebased them at a 6-month cadence over 2-3 years. Upstreamed a few.
Didn't go around saying those old patches were bad ideas and I got rid of them. How you say it matters.
You reduced the number of patches a lot and also pushed very hard to get us to 3.0 after we sat on 2.6.38 ~forever. Which was very appreciated, btw. We built the whole plan going forward based on this work.
I'm not arguing that anyone should be nice to anyone or not (it's a waste of breath when it comes to Linux). I'm just saying that the benchmarking was thorough and that contemporary 2014 hardware could zero pages fast.
At what point did you realize how different fb engineering was from what you expected?
An important nuance: most Facebook engineers don't believe that Facebook/Meta will continue to grow next year, and that disbelief has been there since as early as 2018 (when I joined).
Very few Facebook employees use their products outside of testing, which is a big contributor to that fear: they just can't believe that there are billions of people who would continue to use the apps to post what they had for lunch!
And as a result of that lack of faith, most of them believe that Meta is a bubble that can burst at any point. Consequently, everyone works for the next performance review cycle, and most are just in a rush to capture as much money as they can before that bubble bursts.
Huh.
The time I worked at a hyper-growth company, those of us working in the coal mine had much the same skepticism. Our growth rate seemed ridiculous; surely we're overbuilding, how much longer can this last?!
Happily, the marketing research team regularly presented stuff to our department. They explained who our customers were, projected market sizes (regionally, internationally), projected growth rates, competitive analysis (incumbents and upstarts), etc.
It helped so much. And although their forecasts seemed unbelievable, we overperformed year over year. Such that you sort of start to trust the (serious) marketing research types.
This is not special to Meta in any way, I observed it in any team which has more than 1 strong senior engineer.
For what it's worth, 20 years ago all programming newsgroups were like this. I grew my thick skin on alt.lang.perl lol
Of course technical discussions happen all the time at companies between competent people. But you don't do that in public, nor is this a technical debate: "I don't recall talking to you about it" - "I do, I did xyz then you ignored me" - "<changes subject>"
Always good to talk face to face if you have strong feelings about something. When I said "talk" I meant literally face to face.
Spending a decade or so on lkml, everyone develops a thick skin. But mix it with the corporate environment, Facebook 2011, being an ex-employee adds more to the drama.
Having read through the comments here, I'm still of the opinion that any HW changes had a secondary effect and the primary contributor was a change in how HHVM/jemalloc interacted with MADV.
One more suggestion: evaluate more than one app and company wide profiling data to make such decisions.
One of the challenges in doing so is the large contingent of people who don't have an understanding of CPU uarch/counters and yet have a negative opinion of their usefulness to make decisions like this.
So the only tool you're left with is to run large-scale, rack-level tests in a close-to-prod env, which has its own set of problems and benefits.
It is their loss, I cannot imagine letting a minor work quarrel live rent free in my head for over a decade. I feel bad enough when something is stuck in my mind for a week.
On a more serious note, it seems like any hyper competitive company eventually spirals into an awful, toxic working env.
This thread would've been way more fun with a couple of middle managers and product managers in the mix ;-)
If you don't like the idea of memory cgroups as a security domain, you could tighten it to be a process. But kernel developers have been opposed to tracking pages on a per address space basis for a long time. On the other hand memory cgroup tracking happens by construction.
> within a cgroup
Note the complementary language usage here. You seem to have interpreted that as me writing that it didn't matter what cgroup they are in, which is an odd thing to claim that I implied. I meant within the same cgroup obviously.
Yes, you can read memory out of another process through other means.. but you shouldn't map pages, be able to read them and see what happened in another process. That's the wild part. It strikes me as asking for problems.
I was unaware of MAP_UNINITIALIZED, support for which was disabled by default and for good reason. Seems like it was since removed.
The people deploying it are free to restrict the cgroup to one process before requesting MAP_UNINITIALIZED if there is a concern around security. At that point the memory cgroup becomes a way to get around the page tracking restriction.
But I get why aesthetically this idea sounds icky to a lot of people.
https://research.google/pubs/google-wide-profiling-a-continu... https://engineering.fb.com/2025/01/21/production-engineering...
The profiling clearly showed kernel functions doing memzero at the top of the profiles which motivated the change. The performance impact (A/B testing and measuring the throughput) also showed a benefit at the point the change was committed.
This was when "facebook" was a ~1GB ELF binary. https://en.wikipedia.org/wiki/HipHop_for_PHP
The change stopped being impactful sometime after 2013, when a JIT replaced the transpiler. I'm guessing likely before 2016 when continuous deployment came into play. But that was continuously deploying PHP code, not HHVM itself.
By the time the patches were reevaluated I was working on a Graph Database, which sounded a lot more interesting than going back to my old job function and defending a patch that may or may not be relevant.
I'm still working on one. Guilty as charged of carrying ideas in my head for 10+ years and acting on them later. Link in my profile.