It's overkill for some of our problems but it's working fine! We make mistakes, but they're mistakes we'd have made with most other languages.
I did have to buy Pragstudio licenses for anyone using Elixir on the team. I'd prefer a few books, but most Elixir/Phoenix books don't seem like they're keeping up with the rate of change.
Getting feedback directly from production has been helpful too to tell us when we didn't think something through. We don't use branches, everything is a commit to main and every push is a production deployment so all three of us are in the loop on what each other is doing.
Didn't really buy any books, but testing and trying things out in Livebook has been huge for learning the nuances of the language. LiveDashboard has been great for monitoring things, especially its Postgres plugin. The Discord community has been very supportive, and so have the Elixir Forums.
MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile 32.30s user 7.23s system 320% cpu 12.336 total
MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile 0.37s user 0.49s system 12% cpu 6.970 total
MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile 0.38s user 0.50s system 12% cpu 7.236 total
Machine is a Mac M1 Max, with `rm -rf _build` in between each run.
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
________________________________________________________
Executed in 37.75 secs fish external
usr time 103.65 secs 32.00 micros 103.65 secs
sys time 20.14 secs 999.00 micros 20.14 secs
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
________________________________________________________
Executed in 16.71 secs fish external
usr time 2.39 secs 0.05 millis 2.39 secs
sys time 0.87 secs 1.01 millis 0.87 secs
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile
________________________________________________________
Executed in 17.19 secs fish external
usr time 2.41 secs 1.09 millis 2.40 secs
sys time 0.89 secs 0.04 millis 0.89 secs
mise use elixir@1.19-otp-26 erlang@26
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
________________________________________________________
Executed in 97.93 secs fish external
usr time 149.37 secs 1.45 millis 149.37 secs
sys time 28.94 secs 1.11 millis 28.94 secs
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
________________________________________________________
Executed in 42.19 secs fish external
usr time 2.48 secs 0.77 millis 2.48 secs
sys time 0.91 secs 1.21 millis 0.91 secs
I personally do not like type systems, and still code in JS, not TS. Any JS artifacts I produce are untyped. Yet even my Elixir code is nearly type-ready.
So while TS is fighting an uphill battle, I think Elixir is working downhill.
For me, some serious Elixir adventure is high up on my todo list. But I remain unsure whether I can ever fully enjoy myself with a dynamic language - I think Gleam and Elixir cater to different crowds. Gleam is pure minimalism (just pattern matching, really), but Elixir doesn't seem bloated either.
I am so happy that both languages exist and give alternatives in times of hundreds of node deps for any basic slob webapp.
- A language (Elixir/Erlang) and runtime (BEAM)
- The concurrency standard library (OTP)
The language and runtime give you the low-level concurrency primitives - spawn a process, send a message, etc. But to build actual applications, you rarely use those primitives directly - instead you use GenServers and supervisors. That gives you a way to manage state, turns message passing into function calls, restarts things when they crash, etc.
Gleam compiles to the same runtime, but doesn't provide OTP in the same way - static types aren't easy to make work with the freely message passing world of OTP. They are implementing a lot of the same concepts, but that's work that Gleam has to do.
(Elixir provides some thin API wrappers over the Erlang OTP APIs, and then also provides some additional capabilities like Tasks).
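To make that concrete, here's a minimal hypothetical counter built as a GenServer (the module and function names are made up for illustration) - callers see plain function calls, while GenServer handles the message passing and state underneath:

```elixir
defmodule Counter do
  use GenServer

  # Client API: callers just call ordinary functions...
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def increment, do: GenServer.cast(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # ...server callbacks: GenServer turns those calls into messages,
  # handled one at a time by a single process holding the state.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, count), do: {:noreply, count + 1}

  @impl true
  def handle_call(:value, _from, count), do: {:reply, count, count}
end
```

In a real application a supervisor would start this via a child spec and restart it if it crashed.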
It's never been a wat-language in the style of JavaScript.
Granted, it's alpha software and it's currently embedded in the Hologram framework, but still.
I have a question about how the type inference works. Dialyzer, which also attempts to do type inference, uses "success" typing which means it will not flag something if it could work. It tries to minimize false positives. In practice, this means it hardly ever catches anything for me (and even when it does warn about something, it's usually not real anyway!), so I don't find it that useful.
Does this approach use "success" typing as well? I found the `String.Chars` protocol example interesting, since I've had my fair share of crashes from bad string interpolations. But in the example, it's _clearly_ wrong, and will fail every time. That's not that useful to me because any time that code is exercised (e.g. in a simple test) it would be caught. What's more useful is if some particular code path results in trying to interpolate something wrong.
I know under the hood, the Elixir type system has something to do with unions of possible values, so it is tracking all the things that "could" be passed to a function. Will it warn only if all of them fail, or if any of them fail?
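For reference, here's the runtime behavior behind the interpolation crashes mentioned above, with made-up data (string interpolation goes through the `String.Chars` protocol):

```elixir
user = %{name: "Ada"}

# Binaries implement String.Chars, so this interpolation is fine:
greeting = "Hello, #{user.name}"

# Maps do not implement String.Chars, so interpolating the map itself
# raises Protocol.UndefinedError at runtime:
result =
  try do
    "Hello, #{user}"
  rescue
    Protocol.UndefinedError -> :crashed
  end
```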
Having the $lang_ecosystem address this sounds godsent. Unfortunately we don't use Elixir at $work.
dynamic() > boolean()
integer() > boolean()
I was a fan of Ruby -- because of its pragmatism and subjective beauty -- but then I got into type systems.
Elixir now also has a type system, and so does Ruby...
Though I now program in Kotlin, which syntax-wise is very much a "typed Ruby".
My app is a mix of real-time and REST endpoints, and there's no heavy computation, and even if there was I could just do that one-off in Go or something.
Would Phoenix make sense for me? I have some cool collaborative features in the works.
The language itself is maybe OK but the overall experience is not.
On a production build, stack traces look like Erlang code, which is the weird syntax that Elixir tried to improve upon.
Then you have macros, which make code unmaintainable at the 10k SLOC mark, and increasingly harder to maintain as projects get larger.
Running "mix xref graph" on most Elixir projects shows a spaghetti mess.
The toolchain has much room for improvement. Editing, debugging, profiling, unit testing, or pretty much any basic routine development task, involves a tool that's decades behind the state of the art. Even Borland tools from the 80s have a better toolchain.
Building a team around Elixir is hard. You have to train people on the job and they will probably not write idiomatic code that takes advantage of the language. Or deal with people that won't stop selling you how great the language is.
And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
Support for massive concurrency is nice but you are realistically not going to need it. If you do need it then yes, Elixir can be a good tool for the job.
ExUnit has been hands down the most impressive testing library I've ever worked with, and the debugging, profiling, analytics, introspection, observability, etc of the BEAM is unbeatable.
Documentation of elixir, elixir deps, and elixir code is also far and above any language I've ever seen.
And the struggles I had supporting minimal concurrency in python were completely alleviated - so even if you don't need massive concurrency, elixir has a good chance of massively simplifying anything that needs minimal concurrency (which is probably most web related projects).
I have a lot of respect for the community behind it but the experience is still not there.
I complain about OCaml docs all the time. But Elixir? no way.
> On a production build, stack traces look like Erlang code...
Elixir has the most readable stacktraces of any language I've used. Here's an example (which is color-coded in the terminal for even more clarity and, as you can see, doesn't contain any Erlang code):
== Compilation error in file lib/app_web/live/authentication/settings.ex ==
** (MismatchedDelimiterError) mismatched delimiter found on lib/app_web/live/authentication/settings.ex:96:1:
error: unexpected reserved word: end
│
4 │ on_mount {AppWeb.UserAuth, :require_sudo_mode
│ └ unclosed delimiter
...
96 │ end
│ └ mismatched closing delimiter (expected "}")
│
└─ lib/app_web/live/authentication/settings.ex:96:1
It's easy to see that the issue is on line 4, and that it is a missing curly brace.
> Then you have macros, which make code unmaintainable...
Elixir gives you full access to the AST, making macros extremely easy to read and reason through. The point of the Lisp-style macros Elixir uses is to simplify your code. If your code becomes unmaintainable due to your use of macros, you're probably misusing them. I'd have to see a sample to make that determination, though.
> Running "mix xref graph" on most Elixir projects shows a spaghetti mess.
Spaghetti? It's a simple two-level tree that is in alphabetical order. Here's an example, and it is like this all the way down:
Compiling 26 files (.ex)
Generated stow app
lib/app.ex
lib/app/accounts.ex
├── lib/app/accounts/user.ex (export)
├── lib/app/accounts/user_notifier.ex
├── lib/app/accounts/user_token.ex (export)
└── lib/app/repo.ex
...
To me, this output seems extremely accessible.
> The toolchain has much room for improvement...
Having developed many Windows apps using Borland's tools in the 80s and 90s, I disagree with this statement for these reasons:
• Mix is one of the best and most integrated build tools/task runners I've used. For example, you can create, migrate, and reset databases, execute tests, lint code, generate projects, compile, build assets, install packages, pull dependencies, etc.
• ExUnit is a great testing framework that handles all kinds of tests in an easy-to-read DSL similar to Ruby's RSpec.
• IEx is a fantastic REPL.
• Elixir's debugging tools are excellent. For example, IEx.pry lets you stop all processes and interact with your system in that frozen state in the REPL. You can watch variables, run functions, and even create new functions on the fly to interact with your data to see how it behaves in different scenarios.
> Building a team around Elixir is hard.
Why is it hard? I've worked exclusively on Elixir projects for both start-ups and large companies with hundreds of engineers for over ten years now, and never had a problem with hiring teams.
> And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance, and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
All technical documentation could be improved, but Elixir's is already quite good. See for yourself: https://hexdocs.pm/elixir/1.19.0/Kernel.html.
Furthermore, from within IEx, you can type:
h function_name
for any function (e.g. `h Enum.map/2`) and see an explanation of what the function does and examples of how to use it.
> Support for massive concurrency is nice, but you are realistically not going to need it...
Elixir supports minute concurrency as well, and yes, I do need it. For example, in Ruby on Rails, which has a GIL, I'd have to use a gem like Sidekiq to push long-running processes into Redis so they can be processed in the background. In Elixir, I can just run them in a separate, concurrent process, which is simple.
Here's an example that takes a collection of users and a function and then runs each user through that function, each in a separate lightweight BEAM process:
defmodule ParallelProcessing do
  def map(collection_of_users, func) do
    collection_of_users
    # Spawn one lightweight process per user, running func concurrently...
    |> Enum.map(&Task.async(fn -> func.(&1) end))
    # ...then wait for each result (result order matches input order).
    |> Enum.map(&Task.await/1)
  end
end
Here's the same Elixir code running in a single thread:
defmodule SequentialProcessing do
def map(collection_of_users, func) do
collection_of_users
|> Enum.map(func)
end
end
In the first example, I could have 1 user, 1,000 users, or 1,000,000 users, and this code would run as optimally as possible on all the cores in my CPU or across all the cores in all the CPUs in my multi-server BEAM cluster. There are no extra programs or libraries needed. In the second example, users are processed one at a time, similar to languages like Python, Ruby, JavaScript, PHP, and Perl.
Given the simplicity of writing parallel code in Elixir, why would I limit myself to one CPU core to perform a task when I can use all cores simultaneously?
> Or deal with people that won't stop selling you how great the language is.
The reason they won't stop selling you on Elixir is that Elixir is a fantastic language. I hope you take the time to revisit it in the future. It really is much better than most things out there.
Do you have an example? There are some cases that I can think of where the application dumps some foreign-looking data structures if the release fails to start, but that's very rare and usually the actual error is somewhere near the beginning like "eaddrinuse" here:
[notice] Application my_app exited: MyApp.Application.start(:normal, []) returned an error: shutdown: failed to start child: MyAppWeb.Endpoint
** (EXIT) shutdown: failed to start child: {MyAppWeb.Endpoint, :http}
** (EXIT) shutdown: failed to start child: :listener
** (EXIT) :eaddrinuse
Kernel pid terminated (application_controller) ("{application_start_failure,my_app,{{shutdown,{failed_to_start_child,'Elixir.MyAppWeb.Endpoint',{shutdown,{failed_to_start_child,{'Elixir.MyAppWeb.Endpoint',http},{shutdown,{failed_to_start_child,listener,eaddrinuse}}}}}},{'Elixir.MyApp.Application',start,[normal,[]]}}}")
Here's how runtime errors are normally reported (in a `MIX_ENV=prod mix release` build):
10:47:17.229 [error] GenServer {MyApp.Registry, "some-long-running-thing:4196f8ae-c971-439b-854e-5057e45076b9", %{}} terminating
** (RuntimeError) attempted to call GenServer #PID<0.2892.0> but no handle_call/3 clause was provided
(my_app 1.1.0) /home/runner/work/elixir/elixir/lib/elixir/lib/gen_server.ex:895: MyApp.Monitoring.SomeServerMonitor.handle_call/3
(stdlib 5.2.3.5) gen_server.erl:1131: :gen_server.try_handle_call/4
(stdlib 5.2.3.5) gen_server.erl:1160: :gen_server.handle_msg/6
(stdlib 5.2.3.5) proc_lib.erl:241: :proc_lib.init_p_do_apply/3
Last message (from #PID<0.2891.0>): {:some_unknown_request, %MyApp.Monitoring.Stats{ts: ~U[2025-10-17 08:47:17.229542Z], some_data: 2}}
> Then you have macros, which make code unmaintainable at the 10k SLOC mark, and increasingly harder to maintain as projects get larger.
Absolutely, so don't write macro-heavy code. This is mentioned in the first paragraph of the official Macro documentation and documented as an anti-pattern in the official documentation.
> The toolchain has much room for improvement.
I agree that the editing experience (due to lackluster language server support, which is now being worked on officially) and the debugging tools are lagging behind.
> And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
I don't agree with this at all.
.ex files compile to .beam files on disk, to be run later
.exs files compile in memory and run immediately
You don’t need to know Erlang to use Elixir; I’m a few years in now and I’ve never had to write any Erlang.
I can't speak too much about Python - but immutable data is a core prerequisite for many of the features of OTP (the platform underpinning Elixir and Erlang).
1. It's dynamic
2. It's compiled
3. An Elixir script is just a file with Elixir code that compiles and runs right away
4. I've been writing Elixir for 7 years and barely know any Erlang. I even submitted bugs to the OTP team with reproductions in Elixir; they're chill.
5. Preemptive scheduler, immutable data
Elixir and Erlang are dynamic compiled languages.
The actor model being built in to the runtime offers many benefits, all of which I cannot enumerate here, but prominent among them are the ability for the VM to preemptively schedule actors, and the fact that actors are independent in memory and cannot mess with the internal state of other actors.
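A tiny sketch of that isolation: each process has its own heap, and the only way to touch another process's state is to send it a message (everything here is standard spawn/send/receive):

```elixir
parent = self()

# Spawn an actor with its own private heap; nothing is shared with the parent.
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, {:pong, self()})
    end
  end)

# Interaction happens only via asynchronous message passing.
send(pid, {:ping, parent})

reply =
  receive do
    {:pong, ^pid} -> :pong
  after
    1_000 -> :timeout
  end
```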
The jump from Elixir to Erlang or vice versa is small.
The hardest (and most rewarding) part is learning OTP and the whole BEAM runtime system, which you can do with either language.
Erlang and Elixir are slightly different syntaxes for the same semantics, and if you know one you can learn to read and probably write the other in less than a day.
It isn’t like Clojure and Java where Clojure is significantly higher level than Java in many ways. Elixir adds a few things to Erlang but is otherwise the same programming model.
> I just don't know which niche applications Elixir targets and excels at.
Pretty much any application where concurrent IO and state management are the main problems. Web applications, proxies/brokers, long running network stuff, semi-embedded low power devices that are hard to physically access and must remain reliable for years at a time, that kind of thing.
I actually agree with you that the ecosystem and tooling and especially the value proposition are confusing, and the sales pitch over the years has often been poor.
The whole BEAM community would do well to speak more plainly about the concrete benefits for programmers and companies, and the existing successful applications rather than the theoretical beauty of the syntax or the actor model.
I hope this helps.
A high-level language with a strict shared-nothing concurrency model doesn't need a GIL... but you naturally can't practically share very large objects between BEAM processes.
1. Regarding Python's GIL: The issue isn't memory sharing between threads. Java and Go allow you to do this, too, but they don't have GILs. The reason Python has a GIL is that it uses reference counting for memory management. If it didn't have a GIL, multiple threads could simultaneously manipulate reference counts, which would lead to memory corruption/leaks.
2. You can share massive "objects" between BEAM processes. For example, if you're running BEAM in a 64-bit environment, you can share maps, structs, and other data structures that are up to 2,305,843,009,213,693,951 bytes in size.
I hope this information helps. I also hope it is correct. I think it is, but I've been wrong before.
CPython has historically had a GIL; since 3.13 there is a free-threaded build without one, and that build became officially supported in 3.14 (the default build still has a GIL).
Other Pythons (Jython, GraalPy, PyPy) never had a GIL.
Languages and implementations aren't the same.
So many examples of programming languages have huge breaking changes between versions that end up creating a split in the ecosystem that takes years to resolve.
Thankfully José has been very clear about Elixir being done since at least 2018. The language is stable and the language/core foundation is not changing anymore.
https://www.youtube.com/watch?v=suOzNeMJXl0
Truly outstanding work and stewardship.
I can only think of 2: python 3 and perl 6.
Those two were very traumatic so it's not surprising it feels like more.
While C#, F#, VB and C++/CLI were kept compatible, it doesn't help when the library stuff you want to call isn't there any longer.
C++: the removal of exception specifiers and the GC API.
C: VLAs became optional in C11, function prototypes changed meaning in C23, and K&R declarations were dropped from the standard.
Java, already someone else mentioned.
D, the whole D1 => D2 transition, and Tango vs Phobos drama.
Bit of a quibble but I'm not sure I'd call that a "huge breaking change" given that that feature wasn't really implemented in the first place, let alone actually used.
https://cppreference.com/w/cpp/compiler_support/11.html
It was a bad feature, as the two main commercial C++ products that make use of GC, namely C++/CLI and Unreal C++, were never taken into account while designing it - a good example of how WG21 often does PDF-driven design.
Not so sure I'd call these huge breaking changes. They're breaking, sure, but I'd expect them to be trivial to fix in any existing codebase.
Maybe VLAs are a huge breaking change? Most code never used them, since there was no way at all to use them safely, so while it is a pain to replace all occurrences, the number of usages should be really low.
https://www.phoronix.com/news/Linux-Kills-The-VLA
Breaking changes are breaking changes, even if it is only fixing a comma, someone has to spend part of their time making it compile again, which many times maps to actual money for people working at a company, their salary mapped into hours.
No disagreement there, but the context ITT was specifically about huge breaking changes. I consider those breaking changes, but not necessarily huge ones.
Full of Least Surprise violations, and just far too goddamned big. Did 3 try to pare that back into something reasonable?
See all platforms that have their identity tied with a specific language, the platform's language always has a guaranteed future as long as the platform continues to be industry relevant.
The others on top, come and go.
It's certainly the case that languages need to be championed by competent IDE writers, otherwise they fail to scale. Because you can't have 50 devs all using neovim - and only neovim - without making a gigantic mess. Large projects can sustain a few brilliant people working with one hand tied behind their back, but not everyone.
The issue for me is that Scala design lacks focus. They say yes to too many features.
On paper, it really was just a few changes. In practice, it forced a massive transitive dependency and technical debt cleanup for many companies.
Also breaking changes do happen, see list of removed methods
https://docs.oracle.com/en/java/javase/17/migrate/removed-ap...
Typescript has introduced breaking changes but they're not that bad
There was a rails upgrade around that time that was similarly painful, at least in the humongous rails app I was working in.
This caused quite a lot of work on the apps I worked on.
Ruby 1.8 to 1.9 was a major version change in the semver sense; Ruby wasn't using semver before, IIRC, 2.1.0. It was using a scheme that was loosely like semver with an extra prefix number: Ruby minor versions were equivalent to semver major (and also had a less-stable implication for odd numbers, more stable for even), Ruby "tiny" versions were equivalent to semver minor, and Ruby still had patch versions.
I usually find the Erlang/OTP upgrades to be a bit more problematic compatibility-wise.
So I’m often in the latest elixir but one Erlang/OTP version behind cuz I wait a few months for all the kinks to be worked out.
Python 3 was really, really needed to fix things in 2. Hence 2 became 3. They managed it pretty well, vaguely similar to Go, with automated update tools and compatibility-ish layers. It had its speed bumps and breakages as not everything went smoothly.
OTOH: Ruby 3 went the wrong way with types in separate files and fragmentation of tools. And that's not to mention having to opt in with boilerplate to change how String literals work. Or: gem signing exists but is optional, not centrally managed, and little-used. Or: Ruby Central people effectively stole some gems because Shopify said so.

PS: many years ago Hiroshi Shibata blocked me from all GH Ruby contributions for asking a clarifying question in an issue, for no reason. It seemed aggro, unwarranted, and abrupt. So the rubygems repository fragmentation drama seems like the natural conclusion of unchecked power abuse lacking decorum and fairness, and I don't bother with Ruby much these days because Rust, TS, and more exist.

When any individual or group believes they're better than everyone else, conflict is almost certainly inevitable. No matter how "good" a platform is, bad governance with unchecked conduct will torpedo it. PSA: Seek curious, cooperative, and professional folks with mature conflict-resolution skills.
It's a good idea™ to think deeply and carefully, and to experiment with language tool design in the real world, before inflicting permanent, terrible choices rather than net-better but temporarily painful ones. PSA: Please be honest, thoughtful, and clear, and communicate changes in advance so they can be avoided or minimized, to inflict the least net pain for all users for all time.
Honestly, I hope more development goes into making Phoenix/Elixir/OTP easier, more complete, more expressive, more productive, more testable, and more performant to the point that it's a safe and usable choice for students, hobbyists, startups, megacorps, and anyone else doing web, non-web, big data, and/or AI stuff.
Plug for https://livebook.dev, an app that brings Elixir workbooks to a desktop near you. And https://exercism.org/tracks/elixir
> Honestly, I hope more development goes into making Phoenix/Elixir/OTP easier, more complete, more expressive, more productive, more testable, and more performant to the point that it's a safe and usable choice for students, hobbyists, startups, megacorps, and anyone else doing web, non-web, big data, and/or AI stuff.
Seriously, this has been the case all along. It's a great fit for AI, web (Phoenix), non-web (Nerves), students (PragStudio), hobbyists (hi), and megacorps (Discord, Bleacher Report).
What do you mean it's not testable, productive, expressive enough? Do you mean the entire elixir community is just fiddling about with unsafe software?
This comment seems just like a giant ragebait.
I don't know how you can say this honestly - it was turbulent and fraught with trouble and angst. It most certainly was NOT handled well.
Although, what parts of Elixir itself are rough or missing creature comforts? I generally feel it's stable and fine, but I admittedly haven't written Elixir code in a couple of years, sadly.
The idea that Phoenix is also mostly macros does not hold in practice. Last time this came up, I believe less than 5% of Phoenix' public API turned out to be macros. You get this impression because the initial skeleton it generates has the endpoint and the router, which are macro heavy, but once you start writing the actual application logic, your context, your controllers, and templates are all regular functions.
No, but the framework does push you into using them. A good example is the `use MyAppWeb` pattern - that's a macro that nests other macros. The good news is that you can pretty much excise that and everything works fine, and LLMs have no problem with it either! (I think they slightly prefer it.)
A few cognitive pain points with Phoenix macros:
plug: (love it dearly) but a bit confusing that it creates a conn variable out of whole cloth. A minor complaint - worth it, otherwise.
phoenix.router: is a plug but isn't quite a plug.
Anyway, that's it! The rest is ~fabulous. I think finding a framework where you have only two minor complaints is a blessing. Remember how ActiveRecord automagically pluralized tables for you?
Conn is just a pipeline of functions, the initial Conn struct is created at request time and passed through to each function in the pipeline.
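A toy illustration of that idea, using a plain map as a stand-in for %Plug.Conn{} so it runs without the Plug library (the field names and "plugs" here are made up):

```elixir
# The "conn" is just a data structure threaded through ordinary functions.
conn = %{assigns: %{}, halted: false}

# Each "plug" takes a conn and returns a (possibly updated) conn.
put_current_user = fn conn -> put_in(conn, [:assigns, :current_user], "ada") end
put_locale = fn conn -> put_in(conn, [:assigns, :locale], "en") end

# The pipeline is just a fold of the conn through the plugs, in order.
final = Enum.reduce([put_current_user, put_locale], conn, fn plug, c -> plug.(c) end)
```

Real Plug works the same way conceptually: the endpoint builds the initial conn from the request and each plug in the pipeline transforms it.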
> I believe less than 5% of Phoenix' public API turned out to be macros.
The idea may still be right, but I'm curious if that addresses the majority of the public API that users are greeted with. I have unfortunately not written Elixir in a few years (cries), and I've never fully grokked Phoenix, so perhaps I'm still wrong.
I mean, this describes every full-stack web framework, right? Sure, if the underlying language doesn't have macros or macro-like tools, that limits how perverted the syntax can get, but the line between "DSL" and "API" gets really blurry in all of these massive frameworks.
Wherever rails or phoenix has macro-defined syntax to handle a specific task, laravel or whatever will have a collection of related functions that need to be used in very specific ways to accomplish the same thing. Whether this collection is a "class" with an "api" or whether it is a "language" defined around this "domain" you will have the abstraction and the complexity.
Having a preference for one approach of managing this abstraction & complexity seems fine but "a collection of DSLs" is pretty much what a web framework is so that can't be the problem here.
With macros, even language servers may need customization if they introduce new syntax. The code that runs doesn't exist until it runs, so you can't see it ahead of time.
This doesn't sound like too big a problem if you're familiar with the tooling already, but trying to figure out where some random method comes from in a rails code base when you're new to Ruby is somewhere between a nightmare and impossible without debugging and using the repl to tell you where the source is.
React has a JSX macro, and I love using it, so there's definitely room for them. There is a world of difference in developer experience when macros are used versus when not, however, and it is wrong to say that it is all the same.
It's kind of the standard way to paper over the protocol grit of HTTP and make people able to quickly pump out fresh plumbing between outbound socket and database.
You mean in the sense that the language's built-in syntax and available abstractions get abused so much that it approximates a DSL?