Low-key hate the lack of capitalization on the blog, which made me stumble over every sentence start. Great blog post, a bit marred by unnecessary divergence from standard written English.
Why do people do this? They capitalize names, so clearly their shift key works. Do they do it to feel special, or like some sort of rebel?
i see enough slop and Look At Me on a daily basis. i don't want it to look like an ad or a LinkedIn post in 2026.
If you want sentences without capitalization to be your thing, then go for it. It's just a weird hill to die on, taking away from the readability of your posts for no real reason.
It's the same thing with dark mode as default: i chose it because it's my own preference and i'd love it everywhere, but i'm constantly being flashbanged by phone apps because someone decided #FFFFFF is a good background color while the app is loading.
https://ieviev.github.io/resharp-webapp/
Back in the Usenet days, questions came up all the time about matching substrings that do not contain whatever. It’s technically possible without an explicit NOT operator because regular languages are closed under complement — along with union, intersection, Kleene star, etc. — but a bear to get right by hand for even simple cases.
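For example, "a line that does not contain foo" needs either a lookaround trick or a carefully hand-built automaton in a classical engine, but is a one-liner with complement (the `~` spelling here is my assumption, not necessarily RE#'s exact syntax):

    the usual lookahead workaround:
        ^(?:(?!foo).)*$

    the same language with an explicit complement operator:
        ~(.*foo.*)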
Unbounded lookarounds without performance penalty at search time are an exciting feature too.
1) regex equivalence checker (check that each regex intersected with the complement of the other matches nothing):
https://gruhn.github.io/regex-utils/equiv-checker.html
2) password generator from regex constraints (16+ chars, at least one uppercase char, etc.). Just take the intersection of all constraints and generate random matches from that:
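Concretely, both ideas come down to a couple of combined patterns (using `&` for intersection as elsewhere in the thread and `~` for complement; the exact RE# syntax may differ):

    equivalence of r1 and r2: both of these must match nothing
        r1 & ~r2
        r2 & ~r1

    password constraints as one language to sample random members from
    (16+ chars, at least one uppercase letter, at least one digit):
        .{16,} & .*[A-Z].* & .*[0-9].*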
[0]: my very charitable take, as MS obviously cares about C# much, much more than F#.
In any case, those of us who don't have issues with .NET, or Java (also cool to hate these days), get to play with F# and Scala, and feel no need to be amazed by Rust's type system inherited from ML languages.
It is yet another "Rust but with GC" that pops up in some forum every couple of months.
https://steve-yegge.blogspot.com/2010/12/haskell-researchers...
Lol
Not necessarily with bots, just posting a few links in a company Slack with the request for everyone to upvote it from their personal account could be enough.
- https://github.com/telekons/one-more-re-nightmare
- https://applied-langua.ge/posts/omrn-compiler.html
OMRN is a regex compiler that leverages Common Lisp's compiler to produce optimized assembly to match the given regex. It's incredibly fast. It does omit some features to achieve that, though.
OMRN: No lookaround, eager compilation, can output first match
RE#: No submatches, lazy compilation, must accumulate all matches
Both lookaround and submatch extraction are hard problems, but for practical purposes the lack of lazy compilation feels like it would be the most consequential, as it essentially disqualifies the engine from potentially adversarial REs (or I guess not with the state limit, but then it’s questionable if it actually counts as a full RE engine in such an application).
> how does RE# find the leftmost-longest match efficiently? remember the bidirectional scanning we mentioned earlier - run the DFA right to left to find all possible match starts, then run a reversed DFA left to right to find the ends. the leftmost start paired with the rightmost end gives you leftmost-longest. two linear DFA scans, no backtracking, no ambiguity.
I'm pretty sure that should say "the leftmost start paired with the leftmost end".
This also implies that the algorithm has to scan the entire input to find the first match, and the article goes on to confirm this. So the algorithm is a poor choice if you just want the first match in a very long text. But if you want all matches it is very good.
I’m pretty sure it shouldn’t, that would give you the leftmost shortest match instead of leftmost longest.
there's an implicit `.*` in front of the first pass but i felt it would've been a long tangent so i didn't want to get into it.
so given input 'aabbcc' and pattern `b+`,
first reverse pass (using `.*b+`) marks 'aa|b|bcc'<-
the forward pass starts from the first match:
'aa->b|b|cc' marking 2 ends
then enters a dead state after the first 'c' and returns the longest end: aa|bb|cc
i hope this explains it better
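roughly, the two passes look like this - a hand-rolled sketch of the idea rather than the actual RE# code, with both DFAs hard-coded for `b+` over "aabbcc" and zero-length matches ignored:

    using System;

    string input = "aabbcc";

    // pass 1, right to left, with the DFA for .*b+ (the reversed pattern with the
    // implicit .* in front): an accepting state at i means a match can start at i.
    // for b+ this simply means input[i] == 'b'.
    var canStart = new bool[input.Length];
    bool accepting = false;                        // state of the .*b+ DFA
    for (int i = input.Length - 1; i >= 0; i--)
    {
        accepting = input[i] == 'b';               // .*b+ accepts iff the last char read is 'b'
        canStart[i] = accepting;                   // marks positions 2 and 3: aa|b|b cc
    }

    // pass 2, left to right from each marked start, keeping the rightmost accept.
    // forward DFA for b+: 0 = start, 1 = accepting, 2 = dead.
    int pos = 0;
    while (pos < input.Length)
    {
        if (!canStart[pos]) { pos++; continue; }
        int st = 0, end = -1;
        for (int i = pos; i < input.Length && st != 2; i++)
        {
            st = input[i] == 'b' ? 1 : 2;
            if (st == 1) end = i + 1;              // remember the longest end seen so far
        }
        Console.WriteLine($"match [{pos},{end}): {input.Substring(pos, end - pos)}");  // [2,4): bb
        pos = end;                                 // resume after the match
    }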
So, once it gets going, a traditional engine can produce matches iteratively with no further allocation, but RE# requires allocation proportional to the total number of matches. And in return, it's very much faster and much easier to use (with intersection and complement).
It's also beneficial to merge some of the matching locations into ranges where possible, so when `a*` matches a long sequence of '|a|a|a|a|a|', it can be represented as a range of (0,5). We do this to keep the lookaround internal states smaller in the engine.
(I tried to write some pseudocode here but got annoyed dealing with edge cases like zero-length matches at EOF, sorry.)
The post mentions they also have a native library implemented in Rust without dependencies, but I couldn't find a link to it. Is that available somewhere? I would love to try it out in some of my projects, but I don't use .NET so the NuGet package is of no use to me.
I will open source the Rust engine soon as well, sometime this month.
As for the benchmarks, it's the fastest for large patterns and lookarounds, where leftmost-longest lets you get away with less memory usage so we don't need to transition from DFA to NFA.
In the GitHub readme benchmarks it's faster than the exponential implementation of .NET Compiled, so the 35,000x is an arbitrary multiplier: you can keep adding alternatives and make it 1,000,000x.
for a small set of string literals it will definitely lose to Hyperscan and Rust regex since they have a high-effort left-to-right SIMD algorithm that we cannot easily use.
I think "simple string literals" undersells it. I think that description works for engines like RE2 or Go's regex engine, but not Hyperscan or Rust regex. (And I would put Hyperscan in another category than even Rust regex.) Granted, it is arguably difficult to be succinct here since it's a heuristic with difficult-to-predict failure points. But something like: "patterns from which a small number of string literals can be extracted."
something i've been also wondering is how does Harry (https://ieeexplore.ieee.org/document/10229022) compare to the Teddy algorithm, it's written by some of the same authors - i wonder if it's used in any engines outside of Hyperscan today.
There's a lot of simple cases where you don't really need a regex engine at all.
integrating SearchValues as a multi-string prefix search is a bit harder since it doesn't expose which branch matched, so we would be taking unnecessary steps.
Also, the .NET implementation of Hyperscan's Teddy algorithm only goes left to right. If it went right to left, it would make RE# much faster for these cases.
One part of this is SIMD algorithms to better compete with Hyperscan/Rust, another is the decades of optimizations that backtracking engines have for short anchored matches for validation.
There's analysis to do for specific patterns so we can opt for specialized algorithms, e.g. for fixed-length patterns we skip the left-to-right pass entirely since we already know the match start + match length.
Lots of opportunistic things like this which we haven't done. Also there are no statistical optimizations in the engine right now. Most engines will immediately start looking for a 'z' if there is one in the pattern since it is rare.
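To make the rare-character point concrete, here is a toy sketch of that prefilter idea using the stock .NET Regex class, nothing RE#-specific; the pattern buzz\d+ and the 3-char back-off window are just made up for the example:

    using System;
    using System.Text.RegularExpressions;

    // for buzz\d+, every match contains a 'z' and the last possible 'z' is at most
    // 3 chars past the match start, so jump between 'z's and only verify locally.
    var verify = new Regex(@"\Gbuzz\d+");          // \G anchors the match at the probe position
    string haystack = "aaaa buzz42 bbbb fizz buzz7 cccc";

    for (int i = haystack.IndexOf('z'); i >= 0; i = haystack.IndexOf('z', i + 1))
    {
        for (int start = Math.Max(0, i - 3); start <= i; start++)
        {
            var m = verify.Match(haystack, start);
            if (m.Success)
            {
                Console.WriteLine($"{m.Index}: {m.Value}");   // 5: buzz42, 22: buzz7
                i = m.Index + m.Length - 1;                   // resume scanning after the match
                break;
            }
        }
    }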
FYI some code snippets are unreadable in 'light mode' ("what substrings does the regex (a|ab)+ match in the following input?")
Don Syme seems to no longer be acting as the project lead, and I didn't hear of any successor.
Compared to most actively developed languages, F# looks very stale currently.
'Twas a bad idea to train LLMs on the corpus of leaky, verbose C and C++ first instead of on these strict, strongly-typed, highly structural languages.
https://martinalderson.com/posts/which-programming-languages...
[1] https://www.sciencedirect.com/science/article/pii/S030439751...
The authors seem to claim linear complexity:
> the result is RE#, the first general-purpose regex engine to support intersection and complement with linear-time guarantees, and also the overall fastest regex engine on a large set of benchmarks
The standard way to do intersection / complementation of regexes with NFAs requires determinization, which causes a huge blowup, whereas for us this is the cost of a derivative.
It is true that we cannot avoid enormous DFA sizes; a simple case would be (.*a.*)&(.*b.*)&(.*c.*)&(.*d.*)... which has 2^4 states (each state tracks which subset of {a,b,c,d} has been seen so far), and every intersection adds +1 to the exponent.
How we get around this in the real world is that we create at most one state per input character, so even if the full DFA size is 1 million, you need an input that is at least 1 million characters long to reach it.
The real complexity question is how expensive taking a lazy derivative can get. The first time you use the engine with a unique input and states, it is not linear - the worst case is creating a new state for each character. The second time the same (or similar) input is used, these states already exist and it is linear. So as said in the article it is a bit foggy - lazy DFAs are not strictly linear but behave as such in practical cases.
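As a toy illustration of the "one state per input character" point - this is just the subset-tracking DFA for that `&` example built lazily, not RE# internals:

    using System;
    using System.Collections.Generic;

    // the full DFA for  .*a.* & .*b.* & .*c.* & .*d.*  has 2^4 states
    // (one per subset of {a,b,c,d} seen so far), but building states
    // lazily creates at most one new state per input character.
    char[] required = { 'a', 'b', 'c', 'd' };
    var seenStates = new HashSet<int> { 0 };                   // states = bitmasks of chars seen
    var transitions = new Dictionary<(int, char), int>();      // built on demand

    int state = 0;
    foreach (char ch in "xxaxbxa")                              // 7-char input
    {
        if (!transitions.TryGetValue((state, ch), out int next))
        {
            int bit = Array.IndexOf(required, ch);              // stand-in for taking a derivative
            next = bit >= 0 ? state | (1 << bit) : state;
            transitions[(state, ch)] = next;                    // cache the new transition/state
            seenStates.Add(next);
        }
        state = next;
    }

    Console.WriteLine($"full DFA: {1 << required.Length} states");        // 16
    Console.WriteLine($"materialized lazily: {seenStates.Count} states"); // 3
    Console.WriteLine($"all of a,b,c,d seen: {state == 0b1111}");         // False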
Does this imply that the DFA for a regex, as an internal cache, is mutable and persisted between inputs? Could this lead to subtle denial-of-service attacks, where inputs are chosen by an attacker to steadily increase the cached complexity - are there eviction techniques to guard against this? And how might this work in a multi-threaded environment?
Multithreading is generally a non-issue: you just wrap the function that creates the state behind a lock/mutex; this is usually the default.
The subtle denial of service part is interesting; i haven't thought of it before. Yes, this is possible. For security-critical uses i would compile the full DFA ahead of time - the memory cost may be painful but this completely removes the chance of anything going wrong.
There are valid arguments to switch from DFA to NFA with large state spaces, but RE# intentionally does not switch to an NFA and capitalizes on reducing the DFA memory costs instead (e.g. minterm compression in the post, algebraic simplifications in the paper).
The problem with going from DFA to NFA for large state spaces is that this makes the match time performance fall off a cliff - something like going from 1GB/s to 1KB/s as we also show in the benchmarks in the paper.
As for eviction techniques, i have not researched this; the simplest thing to do is just completely reset the instance and rebuild past a certain size, but likely there is a better way.
But you also have to lock when reading the state, not just when writing/creating it. Wouldn’t that cause lock contention with sufficiently concurrent use?
It only enters the locked region when matching encounters a state that does not exist yet.
[1] Kind of, unless you hit ambiguities that need to be resolved with the maximal munch rule; anyways that’s irrelevant to a single-RE matcher.
[2] In particular, introductions to Brzozowski’s approach usually omit—but his original paper does mention—that you need to do some degree of syntax-tree simplification for the derivatives to stay bounded in size (thus finite in number) and the matcher to stay linear in the haystack.
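For example, writing d_a for the derivative by 'a' and ν(r) for the nullability test (ε if r is nullable, ∅ otherwise), the unsimplified derivatives of a* never repeat syntactically:

    d_a(a*)    = d_a(a)·a*                    = ε·a*
    d_a(ε·a*)  = d_a(ε)·a* | ν(ε)·d_a(a*)     = ∅·a* | ε·(ε·a*)
    ...each further derivative wraps the previous one in more ∅·_ and ε·_ junk,
    so no two are syntactically equal

    with the rewrites  ∅·r → ∅,  ε·r → r,  ∅|r → r  every one of these collapses
    back to a*, so the matcher only ever sees finitely many distinct states.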