I have really come to appreciate modern opinionated tooling like gofmt, which doesn't come with hundreds or thousands of knobs.
[1]: https://github.com/openjdk/jdk/blob/master/src/hotspot/share...
In a way, you can use this list of JVM options to illustrate how successful Java has become: everyone needs an option to make it work the way they like.
As a Java dev, I have maybe used about 10-15 of them in my career.
The weirdest/funniest one I used was on an old Sun Microsystems Solaris server running iPlanet, for a Java EE service.
Since it shared resources with some other back-office systems, it was prone to running out of memory.
Luckily there was a JVM option to handle this!
-XX:OnOutOfMemoryError="<run command>"
It wasn't too important, so we just used it to restart the whole machine, and it would come back to life. Sometimes we used to mess about and have it send funny IRC messages like "Immah eaten all your bytez I ded now, please reboot me".
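As a sketch, the invocation looked something like this (the flag is a real HotSpot option; the jar name and command are hypothetical):

```shell
# Run the reboot command when the JVM exhausts its heap
java -XX:OnOutOfMemoryError="/sbin/reboot" -jar iplanet-service.jar
```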
So do we really need multiple thousand? Having all of them also makes finding the few you actually need much more difficult.
Assuming you don't need 99.9% of them (they should have sane defaults, so you never have to change them or even learn they exist) until that super-rare case when one saves your hide, I'd lean towards yes.
In other words, they might as well be an escape hatch of sorts, that goes untouched most of the time, but is there for a reason.
> Having all of them also makes finding the few you actually need much more difficult.
This is a good point! I'd expect the most commonly changed ones (e.g. memory allocation and thread pools) to be decently well documented, both on the web and in LLM training data sets. The raw docs, however, will read like noise.
I suggest most people never touch any of the other options (flight recording and heap dumps being the exceptions).
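For illustration, the commonly tuned memory and heap-dump flags look like this (sizes and paths are hypothetical; the flags themselves are standard HotSpot options):

```shell
# Fixed 2 GB heap, plus a heap dump on OOM for post-mortem analysis
java -Xms2g -Xmx2g \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/app \
     -jar app.jar
```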
Not JVM options, but these are often also good to tune:
-Djdk.virtualThreadScheduler.parallelism
-Djdk.virtualThreadScheduler.maxPoolSize
-Djava.util.concurrent.ForkJoinPool.common.parallelism
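These are plain `-D` system properties, so (as a sketch with hypothetical values) they go on the command line like:

```shell
java -Djdk.virtualThreadScheduler.parallelism=8 \
     -Djdk.virtualThreadScheduler.maxPoolSize=64 \
     -Djava.util.concurrent.ForkJoinPool.common.parallelism=8 \
     -jar app.jar
```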
In my experience this often both saves memory and improves performance.

In reality the number of options is significantly smaller than the 1843 you mentioned. The list contains boatloads of duplicates because they exist for multiple architectures. E.g. BackgroundCompilation appears on 8 lines of the OpenJDK 25 page: aarch64, arm, ppc, riscv, s390, x86, and twice more without an architecture.
gofmt is "just" a formatting tool, though. The interesting part is that Go code that doesn't follow the Go formatting standard is rejected by the Go compiler. So not only does gofmt not have knobs, you can't even fork it to add knobs, because the rest of the Go ecosystem will outright reject code formatted any other way.
It’s a rather extreme approach to opinionated tooling. But you can’t argue with the results, nobody writing go on any project ever worries about code formatting.
For example, it’s the compiler and not gofmt that dictates that you must write a curly brace only on the same line of an “if” statement. If you put it on the next line, you don’t have unformatted code - you have a syntax error.
However, the compiler doesn’t care if you have too much whitespace between tokens or if you write your slice like []int{1, 2,3,4}, but gofmt does.
We could say the rules of the compiler and gofmt don’t even overlap.
The JVM is like an operating system. A better comparison would be Linux kernel parameters: https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...
Nobody has ever tested all possible inputs to 64 bit multiplication either. You can sample from the space.
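As a sketch of what "sampling the space" can look like in practice, here's a quick property test that checks wrapped 64-bit multiplication against BigInteger on random inputs (class name and sample count are arbitrary choices):

```java
import java.math.BigInteger;
import java.util.Random;

public class MulSample {
    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for reproducibility
        for (int i = 0; i < 100_000; i++) {
            long a = rnd.nextLong(), b = rnd.nextLong();
            // long multiplication wraps mod 2^64; BigInteger.longValue()
            // truncates to the low 64 bits, so the two must agree.
            long expected = BigInteger.valueOf(a)
                                      .multiply(BigInteger.valueOf(b))
                                      .longValue();
            if (a * b != expected) {
                throw new AssertionError("mismatch for " + a + " * " + b);
            }
        }
        System.out.println("100000 sampled products verified");
    }
}
```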
E.g. instead of "<DOMAIN> TLS handshake failed" it will be something like "ERROR: PKIX failed". So now I have to figure out that PKIX refers to PKI, and apparently it would make too much sense to include the domain that failed; instead I have to play a guessing game.
E.g. ERROR: TLS handshake failed: <DOMAIN> certificate chain unverified
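A minimal sketch of producing the kind of message wanted here: a hypothetical helper that attaches the peer's host and port to an otherwise bare handshake failure (the helper, host, and message text are all made up for illustration):

```java
import javax.net.ssl.SSLHandshakeException;

public class TlsErrorDemo {
    // Hypothetical helper: attach the peer's host/port to a bare handshake
    // failure so the log answers "which domain?" without guesswork.
    static String describe(String host, int port, SSLHandshakeException cause) {
        return "ERROR: TLS handshake failed: " + host + ":" + port
                + ": " + cause.getMessage();
    }

    public static void main(String[] args) {
        var cause = new SSLHandshakeException("PKIX path building failed");
        System.out.println(describe("example.com", 443, cause));
    }
}
```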
For what it’s worth, the rise of helpful error messages seems to be a relatively new phenomenon of the last few years.
The difference really becomes apparent when trying to debug a customer's problem at 3am (IME).
Notice how I did not compare to C, but modern alternatives.
Zillions of options. Some important, some not.
You can search for those that may concern you. Good old search or AI "search".
For example, I recently tested the AOT compilation of Clojure (on top of the JVM) code using "Leyden". I used an abandoned GitHub project as a base, but all the JVM parameters related to Leyden had changed names (!) and the procedure had to be adapted. I did it all (as a Dockerfile) in less than an hour with Sonnet 4.6 (complete with downloading/verifying the Leyden JVM, testing, taking notes about the project, testing on different machines, etc.).
These are not trivial calls to the "java" command: it involves a specific JVM and several JVM params that have to work fine together.
The goal was to load 80 000 Clojure/Java classes (not my idea: the original project did that part) and see the results: 1.5 seconds to launch with the Leyden JVM (and the correct params) vs. 6 seconds for a regular launch (a 75% reduction). GraalVM is even faster but much more complicated/annoying to get right.
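For reference, the current three-step AOT workflow looks roughly like this (flag names per JEP 483 in recent JDK builds; the jar name is hypothetical, and Leyden early-access builds have used different spellings over time, which is exactly the renaming problem mentioned above):

```shell
# 1. Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar app.jar
# 2. Build the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -jar app.jar
# 3. Production run: start from the cache
java -XX:AOTCache=app.aot -jar app.jar
```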
It can look overwhelming but I'd say all these parameters are there for a reason and you only need a few of them. But when you need them, you need them.
P.S.: unrelated to TFA, and as a bonus for the "Java is slow" crowd:
time java -jar hello/hello.jar
Hello, World!
real 0m0.040s
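(For completeness, the hello.jar above is presumably nothing more than something like this; the class name is a guess:)

```java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
```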
And that's without any Leyden/GraalVM trick. For Clojure, the "slow" startup times are due to each Clojure function being compiled into its own Java .class file, and there are many Clojure functions. Hence the test with 80 000 Clojure functions from the project I reused: https://github.com/jarppe/clojure-app-startup-time-test
(but it's not maintained, and won't work as-is with the latest Leyden JVM)

I kinda feel the same way about C/C++ warnings. Different code bases decide which warnings are errors. That was a mistake (IMHO).
The other thought I have scanning these options is how many are related to GC. I kinda think GC is a bit of a false economy; it's just hiding the complexity. I wonder if it would've been better to make GC pluggable rather than relying on a host of options, a bit like TCP congestion control: there are /proc parameters for that in Linux, for example, but it's also segregated (e.g. using BBR).
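For context, the whole-collector switches among those GC options already work a bit like that: you pick a collector with one flag and the finer-grained knobs hang off it. These are real HotSpot flags (availability varies by JDK version):

```shell
java -XX:+UseSerialGC -jar app.jar   # single-threaded, small heaps
java -XX:+UseG1GC     -jar app.jar   # the default in modern JDKs
java -XX:+UseZGC      -jar app.jar   # low-pause collector for large heaps
```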
At the end of the day, none of this really matters. As in, the JVM is mature and I think generally respected.
Still in the works, but it's here for those interested: Petrify: https://github.com/exabrial/petrify
For anyone interested, here's the app:
https://apps.apple.com/app/apple-store/id6475267297?pt=11914...
But here it is: the JVM is a modern cathedral.
(I know many conflict and there is not a shell buffer long enough to handle all that)
Kidding aside, I actually said "ugh, seriously" when I saw that there were literally thousands of options. Is there a public program with more options?
This is why lots of engineers waste time fiddling with options to tune the JVM and still need hundreds of replicated micro-services to "scale" their backends, losing money on AWS while never admitting the issue is the technology they chose (Java). It's also why AWS loves customers who use inefficient, expensive technologies.
Even after that, both Go and Rust continue to run rings around the JVM no matter the combination of options.
Go's GC is absolutely awful and leads to nondeterministic pauses and catastrophic latency spikes, especially when memory pressure and heap size are high. Throw the Go GC at a 256GB heap and see how well it survives.
Technologies have strong and weak points. Go's strong points are small, targeted pieces of software and having 66% of a binary basically be if err != nil return err. Rust's strong points are that you get to have the symbol<():soup<_, |_| of { c++ }>> while not saying you're writing c++ and feeling really smug when you say that you only needed to use 5 Arc<Mutex<T>> and rewrote your entire software three times but at least it runs almost as fast as some shitty C that does fgets() in the middle of a hot loop. Java lets you spawn spring boot and instantiate a string through reflection because why not.
I promise you, I can write allocation heavy FizzBuzzEnterpriseFactoryFactories in Rust too.
Recently I had a Python friend use the most balls-to-the-wall Python backend; he couldn't believe Java was faster, but the numbers weren't lying. We did 1 billion iterations of adding a float; it took a few seconds in Java.
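A rough sketch of that benchmark (not the actual code from the anecdote, and not a rigorous JMH benchmark, so treat the timing as ballpark):

```java
public class AddBench {
    public static void main(String[] args) {
        long start = System.nanoTime();
        float acc = 0f;
        for (long i = 0; i < 1_000_000_000L; i++) {
            acc += 1.0f;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Print acc so the JIT can't dead-code-eliminate the loop.
        // (acc saturates at 2^24 = 16777216, since 16777216f + 1f rounds back.)
        System.out.println("acc=" + acc + " in " + elapsedMs + " ms");
    }
}
```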
Development velocity is way greater in Java.
An interface like the one above for sorting the options would probably be quite helpful as well.
[0] https://peter.sh/experiments/chromium-command-line-switches/
Same as, say, ANTLR generating code that parses various texts into ASTs.