Show HN: MOL – A programming language where pipelines trace themselves
38 points by MouneshK 5 days ago | 16 comments
Hi HN,

I built MOL, a domain-specific language for AI pipelines. The main idea: the pipe operator |> automatically generates execution traces — showing timing, types, and data at each step. No logging, no print debugging.

Example:

    let index be doc |> chunk(512) |> embed("model-v1") |> store("kb")
This auto-prints a trace table with each step's execution time and output type. Elixir and F# have |> but neither auto-traces.

Other features:

- 12 built-in domain types (Document, Chunk, Embedding, VectorStore, Thought, Memory, Node)
- Guard assertions: `guard answer.confidence > 0.5 : "Too low"`
- 90+ stdlib functions
- Transpiles to Python and JavaScript
- LALR parser using Lark

The interpreter is written in Python (~3,500 lines). 68 tests passing. On PyPI: `pip install mol-lang`.

Online playground (no install needed): http://135.235.138.217:8000

We're building this as part of IntraMind, a cognitive computing platform at CruxLabx.


nivertech 4 days ago
> Elixir and F# have |> but neither auto-traces.

Using dbg/2 [1]:

  # In dbg_pipes.exs
  __ENV__.file
  |> String.split("/", trim: true)
  |> List.last()
  |> File.exists?()
  |> dbg()
This code prints:

  [dbg_pipes.exs:5: (file)]
  __ENV__.file #=> "/home/myuser/dbg_pipes.exs"
  |> String.split("/", trim: true) #=> ["home", "myuser", "dbg_pipes.exs"]
  |> List.last() #=> "dbg_pipes.exs"
  |> File.exists?() #=> true
---

1. Debugging - dbg/2

https://hexdocs.pm/elixir/debugging.html#dbg-2

reply
anonzzzies 24 hours ago
I should have bet more on Elixir. I did work in F#, but MS never really seemed serious enough about it; the Elixir community keeps going strong.
reply
wavemode 22 hours ago
> The Killer Feature: |> with Auto-Tracing. No other language has this combination

Of the languages listed, Elixir, Python and Rust can all achieve this combination. Elixir has a pipe operator built-in, and Python and Rust have operator overloading, so you could overload the bitwise | operator (or any other operator you want) to act as a pipeline operator. And Rust and Elixir have macros, and Python has decorators, which can be used to automatically add logging/tracing to functions.

It's not automatic for all functions, though having to be explicit/selective about what is logged/traced is generally considered a good thing. It's rare that real-world software wants to log/trace literally everything, since it's not only costly (and slow) but also a PII risk.
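As a concrete sketch of the Python route: overload `__ror__` on a wrapper class and apply it as a decorator, so each `|` step prints its timing and result type. All names here (`Traced`, `chunk`, `count`) are illustrative, not MOL's or any library's actual API:

```python
import time

class Traced:
    """Wrap a function so `value | fn` pipes value through it and prints
    a trace line (step name, result type, elapsed ms). Illustrative only;
    not MOL's actual implementation."""
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, value):
        # `value | traced_fn` lands here because str/list don't define
        # a compatible __or__, so Python falls back to our __ror__.
        start = time.perf_counter()
        result = self.fn(value)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{self.fn.__name__:<8} -> {type(result).__name__:<6} {elapsed_ms:.3f} ms")
        return result

@Traced
def chunk(text):
    return text.split()

@Traced
def count(words):
    return len(words)

result = "one two three" | chunk | count
# prints one trace line per step; result == 3
```

Only decorated functions get traced, which matches the point above: you opt in per step rather than tracing everything.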

reply
tyushk 22 hours ago
In Rust, wouldn't implementing BitOr for Fn/FnOnce/FnMut violate the orphan rule?
reply
wavemode 21 hours ago
I'm envisioning that in Rust (and Python), the operator overload would be on a class/struct. It would be the macro/decorator (the same one that adds logging) which would turn the function definition into an object that implements Fn.
reply
graemep 8 hours ago
I have done exactly that as an exercise in what you can do with Python: overload |, plus a decorator you can use on any function to return an instance of a callable class that calls that function and overloads |.

Whether it is a good idea to use it is another matter (it does not feel Pythonic), but it is easy to implement.
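For reference, that pattern fits in a dozen lines. This sketch (illustrative names, not from any library) also supports parameterized steps in the style of MOL's `chunk(512)`:

```python
class Pipeable:
    """Callable class overloading `|`: calling it with arguments returns
    a partially applied step, so both `x | f` and `x | f(arg)` work."""
    def __init__(self, fn, *args):
        self.fn = fn
        self.args = args

    def __call__(self, *args):
        # f(512) builds a new step with the extra arguments bound.
        return Pipeable(self.fn, *args)

    def __ror__(self, value):
        # x | f pipes x in as the first argument.
        return self.fn(value, *self.args)

@Pipeable
def chunk(text, size):
    return [text[i:i + size] for i in range(0, len(text), size)]

@Pipeable
def upper(text):
    return text.upper()

print("abcdef" | chunk(2))   # ['ab', 'cd', 'ef']
print("hi there" | upper)    # HI THERE
```

One caveat: if the left-hand value itself defines `__or__` (e.g. a set or int), Python consults that first, so a robust version would need to guard against such collisions — part of why it doesn't feel Pythonic.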

reply
carterschonwald 9 hours ago
Somehow this reads like model CoT.
reply
bmitc 16 hours ago
Rust is really not built for pipelining. It is extremely cumbersome to do even moderately sized chains of maps, filters, etc.

Python's scoping and mutability make it an extremely poor language for pipelining.

reply
cpa 14 hours ago
Pretty cool to have a first-class tracing mechanism. Obviously... it's a monad! Haskell has had a MonadTrace monad for a long time, which can be switched on or off depending on your environment.

https://hackage.haskell.org/package/tracing-0.0.7.4/docs/Con...

reply
PunchyHamster 13 hours ago
haskell guys gonna call for loop a monad and then gush how amazing monads are
reply
bb88 22 hours ago
It's cool to see someone build another language in Python using Lark. It's also possible to override the ">>" or "|" operators in Python to achieve the same thing, and then you don't have to worry about the Lark grammar.

I had a custom Lark grammar I thought was cool for doing something similar, but after a while I discarded it and went back to straight Python, and found it was faster by an order of magnitude.

reply
vjerancrnjak 14 hours ago
Pipelines are often dynamic; how is that achieved?

Pipelines are just a description of a computation. Sometimes it makes sense to favor throughput over low latency by batching; is execution separate from the pipeline definition?

reply
yellowapple 13 hours ago
I like it. Seems like a nice combination of features. It's pitched at AI/ML use cases, which is understandable given the current hype train, but at first glance I think it can stand up well in a more general-purpose context.

Re: pipe tracing, half a decade or so ago I made a little language called OTPCL, which has user-definable pipeline operators; combined with the ability to redefine any command in a given interpreter state, it'd be straightforward for a user to shove something like (pardon the possibly-incorrect syntax; haven't touched Erlang in a while)

    'CMD_|'(Args, State) ->
        io:format("something something log something something~n"),
        otpcl_core:'CMD_|'(Args, State).
into an Erlang module, and then by adding that to a custom interpreter state with otpcl:cmd/3 you end up with automatic logging every time a script uses a pipe.

Downside is that you'd have to do this for every command defining a pipe operator (i.e. every command with a name starting with "|"); alternate user-facing approach would be to get the AST from otpcl:parse/1, inject log/trace commands before or after every command, and pass the modified tree to otpcl:interpret/2 (alongside an interpreter state with those log/trace commands defined). Or do the logging outside of the interpreter between manual calls to otpcl:interpret/2 for each command; something like

    trace_and_interpret([], State) ->
        {ok, State};
    trace_and_interpret([Cmd|Tree], State) ->
        io:format("something something log something something~n"),
        {_, NewState} = otpcl:interpret([Cmd], State),
        trace_and_interpret(Tree, NewState).
should do the trick, covering all pipes and ordinary commands alike.
reply
nnnnico 21 hours ago
Cool project. Could you expand on the use case for something like this compared to, e.g., a Python library? Maybe an example of more complex workflows or open-ended loops/agents that showcases the pros of using such a language over other solutions. Are these pipelines durable, for example, and how do they execute?
reply
qrios 23 hours ago
Very interesting! I'll definitely give it a try. However, the documentation link[1] isn't working at the moment (404).

[1] https://crux-ecosystem.github.io/MOL/

reply
desireco42 20 hours ago
Kind of like Ruby... with pipes. Elixir has them, but this reminds me more of Ruby.
reply