I'm the author of an Elixir Agent Framework called Jido. We reached our 2.0 release this week, shipping a production-hardened framework to build, manage and run Agents on the BEAM.
Jido now supports a host of Agentic features, including:
- Tool Calling and Agent Skills
- Comprehensive multi-agent support across distributed BEAM processes with Supervision
- Multiple reasoning strategies including ReAct, Chain of Thought, Tree of Thought, and more
- Advanced workflow capabilities
- Durability through a robust Storage and Persistence layer
- Agentic Memory
- MCP and Sensors to interface with external services
- Deep observability and debugging capabilities, including full-stack OTel
I know Agent Frameworks can be considered a bit stale, but there hasn't been a major release of an agent framework on the BEAM. With a growing realization that the architecture of the BEAM is a good match for Agentic workloads, the time was right to make the announcement.
My background is enterprise engineering, distributed systems and Open Source. We've got a strong and growing community of builders committed to the Jido ecosystem. We're looking forward to what gets built on top of Jido!
Come build agents with us!
I've read a lot on HN about how the BEAM execution model is perfect for AI. I think a crucial part that's usually missing in LLM-focused libraries is the robustness story in the face of node failures, rolling deployments, etc. There's a misconception about Elixir (demonstrated in one of the claw comments below) that it provides location transparency - it ain't so. You can have the most robust OTP node, but if you commit to an agent inside a long-running process, it will go down when the node does.
Having clear, pure agent state between every API-call step goes a long way toward solving that: put it in Mnesia or Redis, and pick up on another node when the original is decommissioned. Checkpointing is the solution.
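A minimal sketch of that checkpointing idea (this is illustrative, not Jido's API; `:ets` stands in for Mnesia or Redis - in a real deployment the store must survive the node, which `:ets` does not):

```elixir
defmodule Checkpoint do
  @moduledoc """
  Sketch: persist pure agent state after every step so another node
  can resume the agent when the original goes down.
  """
  @table :agent_checkpoints

  # Create the table once per node (a replicated store in production).
  def init, do: :ets.new(@table, [:named_table, :public, :set])

  # Persist the full, pure state under the agent's id after each step.
  def save(agent_id, state), do: :ets.insert(@table, {agent_id, state})

  # On another node (or after a crash), pick up where we left off.
  def resume(agent_id) do
    case :ets.lookup(@table, agent_id) do
      [{^agent_id, state}] -> {:ok, state}
      [] -> {:error, :not_found}
    end
  end
end

Checkpoint.init()
Checkpoint.save("agent-1", %{step: 2, messages: ["hi"]})
{:ok, resumed} = Checkpoint.resume("agent-1")
```

Because the state is a plain data structure rather than something captured inside a live process, saving and resuming it is just a write and a read.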
Jido core has zero LLM support for this reason.
There's nearly 40 years of "Agent" research in CompSci; LLMs came along and we threw out all of it. I didn't like that, so I spent time researching this history to do my best at considering it with Jido.
That said, I love LLMs - but they belong in the Jido AI package.
Speaking of inaccuracies, the BEAM does provide pretty good location transparency - but resource migration between nodes in particular is not part of the built-in goodies that OTP brings.
https://github.com/openai/symphony
I'm not very familiar with the space, I follow Elixir goings on more than some of the AI stuff.
It is curious... and refreshing... to see Elixir & the BEAM popping up for these sorts of orchestration type workloads.
https://web.archive.org/web/20260305161030/https://jido.run/
I just LLM-built an A2A package which is a GenServer-like abstraction. However, I missed that there was already another A2A implementation for Elixir. Anyway, I decided to leave it up because the package semantics were different enough. Here it is if anyone is interested: https://github.com/actioncard/a2a-elixir
The future is going to be wild
Edit: for those not familiar with the BEAM ecosystem, observer shows all the running Erlang 'processes' (internal to the VM). Here are some example screenshots from one of the first Google hits I found:
https://fly.io/docs/elixir/advanced-guides/connect-observer-...
Teaser screenshot is here: https://x.com/mikehostetler/status/2025970863237972319
Agents, when wrapped with an AgentRuntime, are typically a single GenServer process. There are some exceptions if you need a larger topology.
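For readers unfamiliar with the pattern, a single-process agent is roughly this shape - a hedged sketch of the idea, not Jido's actual `AgentRuntime` API:

```elixir
defmodule AgentServerSketch do
  @moduledoc """
  Sketch: one agent = one GenServer. All agent state lives inside
  this single BEAM process, and instructions arrive as messages.
  """
  use GenServer

  def start_link(agent_id), do: GenServer.start_link(__MODULE__, agent_id)

  # Send the agent an instruction and wait for acknowledgement.
  def instruct(pid, msg), do: GenServer.call(pid, {:instruct, msg})

  def history(pid), do: GenServer.call(pid, :history)

  @impl true
  def init(agent_id), do: {:ok, %{id: agent_id, history: []}}

  @impl true
  def handle_call({:instruct, msg}, _from, state) do
    # Real runtimes would plan/act here; we just record the message.
    {:reply, :ok, %{state | history: [msg | state.history]}}
  end

  def handle_call(:history, _from, state) do
    {:reply, Enum.reverse(state.history), state}
  end
end
```

This is exactly what shows up in observer: each agent appears as one supervised process in the tree.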
I was curious about the actual BEAM processes though, that you see via the observer application in Erlang/Elixir.
It's use-case specific though - security is a much bigger topic than just "agents in containers".
The point of Jido isn't to solve this directly - it's to give you the tools to solve it for your needs.
Congrats on the release!
This will be solved - and I hope that Jido can be a meaningful participant in that wider conversation.
I used Claude to learn & refine the patterns, but it couldn’t write this level of OTP code at that time.
As models got better, I used them to find bugs and simplify - but the bones are roughly the same from that original design.
Although... the agent orchestration is really the easy part. It is just a loop. You can solve this in many different ways and yes some languages are more suitable for this than others. But still - very straightforward.
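To make the "it is just a loop" point concrete, here is a minimal sketch of the think-act-observe cycle. `plan` and `run_tool` are hypothetical stand-ins for an LLM call and a tool invocation:

```elixir
defmodule AgentLoop do
  @moduledoc """
  Sketch of a generic agent orchestration loop: ask the planner what
  to do, run the chosen tool, fold the observation back into state,
  and repeat until the planner finishes or we hit a step budget.
  """

  def run(state, plan, run_tool, max_steps \\ 5)

  # Step budget exhausted: stop rather than loop forever.
  def run(state, _plan, _run_tool, 0), do: {:halted, state}

  def run(state, plan, run_tool, steps_left) do
    case plan.(state) do
      {:finish, answer} ->
        {:done, answer}

      {:tool, call} ->
        observation = run_tool.(call)
        next_state = %{state | observations: [observation | state.observations]}
        run(next_state, plan, run_tool, steps_left - 1)
    end
  end
end
```

The hard parts the comment below describes - auth, distribution, pausing, injecting context mid-run - all live outside this loop, which is the point: the loop itself is trivial.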
The hard part is making sure these agents can do useful things, which requires connecting them to tools. Although just adding bash might seem like checking that box, the reality is more complex when it comes to authentication (and not only that). It is even more problematic when you need to run this in some sort of distributed way, where you need to inject context midway, abort, or pause - and do so with all the constraints in mind, like timing issues for minted URLs and tokens, etc. Btw, adding messages to the context while the LLM is doing some other job (which you might want to do for all kinds of reasons) does not always work, because the system is not deterministic. So you need to solve this somehow.
Even harder is coming up with useful ways to apply the technology. The technical side of things can be solved with good engineering, but most applications of these agents are around pretty basic use-cases, and adoption has sort of stagnated. 99% of these agents are question/answer bots, task/calendar organisers, or something to do with spam - and the most useful one is coding assistants.
And so frankly I think the framework is irrelevant at this point unless one figures out how to do useful things.
I came to similar conclusions - what does valuable agentic software look like? It's not OpenClaw (yet)
The game theory then, in my opinion, is to focus on the knowable frontier - implement tools we can trust - and continue working and sharing that work.
I am holding onto the optimistic case - valuable use cases beyond coding agents will emerge.
There's a growing community showcase, and I have a list of private/commercial references as well, depending on your goals.
(Probably complementary, but wanted to check)
https://hex.pm/packages/req_llm
ReqLLM is baked into the heart of Jido now - we don't support anything else
This agentic framework can co-exist with LangChain if that's what you're wondering.
As LLM APIs evolved, I needed more, so I built ReqLLM, which is now embedded deeply into Jido.
I am an amateur - can you point me in the correct direction to understand the BEAM and use Jido 2.0 to start building? Please.
Thanks, Jose
Sidian Sidekicks, Obsidian vault reviewer agents.
I think Jido will be perfect for us and will help us organize and streamline not just our agent interactions but also make it clearer what is happening and which agent is doing what.
And on top of that, I get an excuse to include Elixir in this project.
Thanks for shipping.
Agree on operational boundaries - it took a long time to land where we did with the 2.0 release
Too much to say about this in a comment, but take a look at the "Concepts: Executor" section - it digs into the model here
Actions can enforce an output schema: https://hexdocs.pm/jido_action/schemas-validation.html#outpu...
Agents can as well - but it can be implemented a few different ways.
What's old is now rebranded, reheated and new again.
The tradeoff is that heavy ML still lives in Python and on GPUs. So run models as external services over gRPC or HTTP, or use Nx with EXLA for smaller on-node work. If you need native speed, use Rustler NIFs or ports - but never block the BEAM scheduler, or the node will grind to a halt.
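One common way to keep a slow external model call from wedging a BEAM process is to isolate it in a `Task` with a timeout. A sketch, where `call_model` is a hypothetical stand-in for your HTTP/gRPC client function:

```elixir
defmodule ModelClient do
  @moduledoc """
  Sketch: run a slow external inference call in its own Task so the
  caller can bound the wait and kill a stuck request, instead of
  blocking a GenServer (and its mailbox) indefinitely.
  """

  def infer(call_model, input, timeout \\ 30_000) do
    task = Task.async(fn -> call_model.(input) end)

    # Wait up to `timeout` ms; if the task hasn't replied, kill it.
    case Task.yield(task, timeout) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      {:exit, reason} -> {:error, reason}
      nil -> {:error, :timeout}
    end
  end
end
```

The scheduler itself only blocks on long-running NIF calls, not on message waits like this - which is why the NIF/port route needs the extra care the comment mentions (dirty schedulers or chunked work).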
Just a heads up, some of your code samples seem to be having an issue with entity escaping.