> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
This section is an extremely useful reference
> People must be vouched for before interacting with certain parts of a project (the exact parts are configurable to the project to enforce).
https://github.com/mitchellh/vouch
I think many projects will adopt this instead of allowing everyone / blocking everyone
Many projects have an "AI slop" check in place to directly close the PR and ban the user if a submission is judged to be AI slop. Otherwise, it will be hard to handle the velocity of PRs.
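For concreteness, here's a minimal sketch of what such an auto-close hook might look like against the GitHub REST API - everything here (the repo name, token, PR number, and the flagging decision itself) is a hypothetical placeholder, not any particular project's actual bot:

```python
# Hypothetical moderation hook: comment on and close a PR that
# maintainers have flagged as "AI slop". The repo, token, and PR
# number are placeholders; real projects would wire this into a
# bot or CI job, and the flagging logic is out of scope here.
import requests

GITHUB_API = "https://api.github.com"
REPO = "example-org/example-repo"  # placeholder repository
TOKEN = "ghp_..."                  # placeholder access token
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def close_as_slop(pr_number: int, reason: str) -> None:
    """Leave an explanatory comment, then close the pull request."""
    # Pull requests share the issues endpoints for comments and
    # state changes, so both calls go through /issues/{number}.
    base = f"{GITHUB_API}/repos/{REPO}/issues/{pr_number}"
    requests.post(
        f"{base}/comments",
        headers=HEADERS,
        json={"body": f"Closing: {reason}"},
    ).raise_for_status()
    requests.patch(
        base,
        headers=HEADERS,
        json={"state": "closed"},
    ).raise_for_status()

if __name__ == "__main__":
    close_as_slop(1234, "flagged by maintainers as low-effort LLM output")
```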
Will it fix a related but different problem? Likely.
At this point they will probably just fork yet again and maintain some vibe compiler.
Given what you’ve said above, it would be an easy task ‘accelerating quality and features exponentially’, so you’ll soon be able to show them (perhaps within days!) the error of their ways.
Please go do it now, we’ll wait.
But I believe that is not the reason Rust adopted this policy; I think they just have a more basal and subjective dislike of AI, irrespective of whatever truth you may have just cited.
Rust is already well past 1.0. At best an LLM could discover a vulnerability (and the human using it can file a patch) or can help a human improve ergonomics.
The main problem is that the problem space is vast and highly interconnected. The LLM needs to reason about the entire language every time it suggests an architectural change, but it can't, so it suggests local changes that make sense to me - a language hobbyist - then runs into much more difficult problems down the road.
Maybe Mythos with a lot of (competent) human hand-holding and pre-design can do it.
I sure hope so. I expect the end result will disprove the following:
> The Rust team will never be able to catch up to them
The AI jackasses have been braying in this key for going on a few years now, and there hasn't been one single time any of this breathless noise has resulted in something meaningfully superior. It's time to put up or shut up. Enough bullshit talk. If you can vibeslop a better Rust (or whatever), JFDI and leave everyone behind.
Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a man - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...
https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...
It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in ways they would never be able to verify anyway:
> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...
Like, seriously, what's the point of explicitly allowing this? Imagine the opposite were true and you weren't allowed to do this - what would they do? Revert an update because the person later claimed they had checked it with an LLM?
The Linux policy on this is far superior and more sensible.
Explicit permission can be useful to preemptively cut off questions from well-meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly or "obvious" the answer might seem), or who might get agitated by misconstruing a list of banned uses as an overly verbose "no LLMs ever" overreach.
> It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]
Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts.
I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.
What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.
Imagine if they just said "LLMs are banned" - then there's a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").
I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.