15 years later, Microsoft morged my diagram
939 points by cheeaun 18 hours ago | 357 comments

m12k 15 hours ago
Regarding the original git-flow model: I've never had anyone able to explain to me why it's worth the hassle to do all the integration work on the "develop" branch, while relegating the master/main branch to just being a place to park the tag from the latest release. Why not just use the master/main branch for integration instead of the develop branch - like the git gods intended - and then not have the develop branch at all? If your goal is to have an easy answer to "what's the latest release?", you have the tags for that in any case. Or if you really want to have a whole branch just to double-solve that one use-case, why not make a "release-tags" branch for that, instead of demoting the master/main branch to that role, when it already has a widely used, different meaning?

It's a pity that such a weird artifact/choice has made its way into a branching model that has become so widely implemented. Especially when the rest of it is so sensible - the whole "feature-branch, release-branch, hotfix" flow is IMO exactly right for versioned software where you must support multiple released versions of it in the wild (and probably the reason why it's become so popular). I just wish it didn't have that one weirdness marring it.

reply
iainmerrick 14 hours ago
You’re right. I think what you’re describing is “trunk based development” and it’s much better.

Maybe I’m overly cynical but I think git-flow was popular largely because of the catchy name and catchy diagram. When you point out that it has some redundant or counter-productive parts, people push back: “it’s a successful model! It’s standard! What makes you think you can do better?”

There’s a nice write-up of the trunk-based style at https://trunkbaseddevelopment.com/ that you can point to as something better.

reply
whoknowsidont 9 hours ago
> but I think git-flow was popular largely because of the catchy name and catchy diagram.

It was because Git showed up in the era of SVN / CVS where those branching models were created because of the uh... let's just call it technical mishaps of those source control systems.

Git did not have the hang-ups of SVN / CVS / etc., but people stuck with what was familiar.

reply
zamalek 7 hours ago
Yup, there would have been much less Git buy-in if it weren't for git flow; people grow incredibly attached to their beloved taxonomies.
reply
throwaway150 6 hours ago
> Yup, there would have been much less Git buy-in if it weren't for git flow

I don't buy this. I've never used git-flow in my life. No team I've worked for has ever used git-flow. Yet all of us have been using Git for ages. Git has been hugely successful on its own, and different teams follow different Git workflows. Its success has very little to do with git-flow.

reply
whoknowsidont 6 hours ago
>I don't buy this.

It's not really debatable. Git flow came about because of SVN / CVS practices; it was the first branching model many people used, and for many it still is THE branching model.

>Yet all of us have been using Git for ages

You say "all of us" but then you completely ignore the primary branching model the vast, vast majority of people use on Git.

Just for the record, this isn't being stated in support of git-flow; it's just a historical fact that's not really debatable.

reply
ncphillips 5 hours ago
> the primary branching model the vast, vast majority of people use on Git.

> it's just a historical fact that's not really debatable.

Over my last 15 years of software dev, I have _never_ heard of anyone actually using Gitflow in their codebase.

I'm not saying you're wrong. My experience is anecdotal. But I don't know why you say it's a "fact". Were there surveys or anything?

reply
zamalek 2 hours ago
I'm not questioning your experience, but how "enterprise" is that experience? Gitflow was no small part of my convincing my company to move off TFVC. I doubt they still use it, but it was shallow waters for scared folk.

I strongly doubt that my story, just as much as yours, is unique.

reply
throwaway150 5 hours ago
> It's not really debatable.

Very weird for you to start a reply like this when we are literally debating it.

> You say "all of us"

Yes, I mean those of us who don't use git-flow. That's what I meant by "all of us".

> ignore the primary branching model the vast, vast majority of people use on Git.

Do you live in a git-flow bubble or what? I've been using VCS since the dark ages of CVS. Moved to SVN. Mercurial. Git. Never worked in a team using git-flow. Never used git-flow myself. Never met anyone IRL who uses git-flow. I only read about these things on HN and blogs.

What kind of stats do you have to claim that this is the primary branching model? If I go by my experience, it's a minority branching model that only people living within the bubble care about.

> it's just a historical fact that's not really debatable.

What is the historical fact? That people use git-flow? Nobody is contesting that. What I am contesting is the claim that the success of Git is connected to git-flow, like the great-grandparent comment said.

reply
disruptiveink 13 hours ago
Correct. If you can always either fix forward or roll back (which you should be able to, unless you're building software that goes out in separately versioned releases that need to keep getting fixes), trunk-based development simplifies everyone's lives greatly.

I've never seen an organisation that insists on release branches and complicated git merge flows for its web-based software gain any actual benefit that isn't dwarfed by the amount of tooling you need to put around it to make it workable for the dev team. Even then, people will routinely screw it up and need to reach out to the 5% of the team that actually understands the system before they can go back to doing work.

reply
QuercusMax 8 hours ago
I've done branchy development to good effect for user-installable software, where we committed to maintain e.g. 3.2.x for a certain time period, so we had to keep release branches around for a long while.

But for continuously deployed SaaS or webapps, there's no point.

reply
plorkyeran 3 hours ago
I've worked on software where we had multiple maintained release branches and we always just worked off master and then cut long-lived release branches from master at some point. Once a branch was cut we'd never merge master into it again and instead backport just specific fixes, which is quite different from git-flow.
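
A minimal sketch of that backport flow, with made-up branch and version names and a placeholder commit hash:

    # cut a long-lived release branch from master; never merge master back into it
    git branch release-3.2 master
    git push origin release-3.2
    # later, backport one specific fix instead of merging master in
    git switch release-3.2
    git cherry-pick -x <sha-of-fix-on-master>
    git push origin release-3.2

The -x flag records the original commit id in the backported commit's message, which helps when auditing what has and hasn't been backported.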
reply
dcrazy 4 hours ago
Until you have a customer that must stay on v.previous for extra time for some reason.
reply
QuercusMax 3 hours ago
Well in that case it sounds like you're shipping multiple versioned instances of your software for different clients, which is much closer to shrink-wrapped software than it is to e.g. gmail.
reply
mort96 10 hours ago
If what they described is "trunk based development", then "git flow" is just "trunk based development where the trunk is called develop and there's a branch which always has the latest release". Is that it?
reply
whoknowsidont 9 hours ago
Nope. Gitflow is not trunk based development.
reply
mort96 6 hours ago
Then what's different other than the names of the branches?
reply
choeger 15 hours ago
I have been working with main/master for years now, and there's one problem you don't have with develop: whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done. If your changes are somewhat independent, you can cherry-pick them from develop into master in an arbitrary order and call that a release whenever you want to.
reply
Gigachad 15 hours ago
I worked at a place that had GitLab review apps set up, where the QA people could just click a button and it would create an instance of the app with just that PR on it. Then they could test, approve, and kill the instance.

Then you can merge to master and it's immediately ready to go.

reply
ncphillips 5 hours ago
Yeah, same. The idea that you'd be merging code to `main` that isn't ready to deploy is crazy to me, but that doesn't mean you need a `develop` and `prod` branch. Main plus one layer of branches has generally been totally sufficient. We either deploy to a branch-preview environment or we just test it locally.
reply
embedding-shape 15 hours ago
> Whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done.

That's what tags are for: QA tests the tagged release, then that gets released. Master can continue changing up until the next tag, then QA has another thing to test.
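
For example, a minimal sketch assuming an annotated-tag naming scheme like vX.Y.Z (names are illustrative):

    # hand QA a fixed point to test, cut from the current tip of master
    git tag -a v1.3.0-rc1 -m "Release candidate 1 for QA" master
    git push origin v1.3.0-rc1
    # master keeps moving; once QA signs off, tag the final release on that same commit
    git tag -a v1.3.0 -m "Release 1.3.0" v1.3.0-rc1
    git push origin v1.3.0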

reply
dsego 12 hours ago
Can I tag a bugfix that goes in after a feature was already merged into main? Basically out of order. Or do I need to tag the bugfix branch, in which case the main branch is no longer the release, so we need to ensure the bugfix ends up in the remote main branch as well as the release. Seems like it could cause further conflicts.
reply
WorldMaker 7 hours ago
git doesn't care what order or from which branch you tag things in. If you need to hotfix a previous release, you branch from that previous release's tag, make your bugfix, tag that bugfix, then merge the whole thing back to main.
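
A minimal sketch of that (version numbers and branch name are illustrative; git switch needs git 2.23+, otherwise use git checkout -b):

    # branch from the previous release's tag
    git switch -c hotfix/v1.2.1 v1.2.0
    # ...commit the bugfix...
    git tag -a v1.2.1 -m "Hotfix release 1.2.1"
    # merge the whole thing back to main
    git switch main
    git merge hotfix/v1.2.1
    git push origin main v1.2.1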

Presumably you are maintaining the ordering of these releases with your naming scheme for tags. For instance, using semver tags with your main release being v1.2.0 and your hotfix tag being v1.2.1, even while you've got features in flight for v1.3.0 or v1.4.0 or v2.0.0. Keeping track of the order of versions is part of semver's job.

Perhaps the distinction is that v1.2.0 and v1.2.1 are still separate releases. A bug fix is a different binary output (for compiled languages) and should have its own release tag. Even if you aren't using a compiled language but are using a lot of manual QA, different releases have different QA steps and tracking that with different version numbers is helpful there, too.

reply
embedding-shape 12 hours ago
I'm not sure what you mean; what does "tag a bugfix", "tag the bugfix branch" or "ensure the bugfix ends up in the remote main branch as well as the release" even mean?

What are you trying to achieve here, or what's the crux? I'm not 100% sure, but it seems you're asking how to apply a bug fix, which you'd like to be part of the eventual release, while QA is testing a tag, but without stacking it on top of other features? Or is it about something else?

One misconception I can see already is that tags don't belong to branches; they point at commits. If you have branch A and branch B, with branch B having one extra commit that carries a tag, then once you merge branch B into branch A, the tag is still pointing to the same commit; the tag has nothing to do with branches at all. Not that you'd use this workflow for QA/releases, but it should at least get the point across.
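
A tiny demonstration, with made-up branch and tag names:

    # tag the tip of branch B
    git switch B
    git tag -a v1.0 -m "some release"
    git rev-parse "v1.0^{commit}"   # note the commit hash
    # merge B into A; the tag does not move
    git switch A
    git merge B
    git rev-parse "v1.0^{commit}"   # same hash as before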

reply
franktankbank 11 hours ago
It means you need a bugfix on your release and you don't want to carry in any other features that have been applied to master in the meantime.
reply
tlamponi 9 hours ago
In that case one can just branch off a stable-x.y branch from the respective X.Y release tag as needed.

It really depends on the whole development workflow, but in my experience it was always easier and less hassle to develop on the main/master branch and create stable release or fix branches as needed. With that, one also prioritizes fixing on master first and then cherry-picks that fix directly to the stable branch, with any adaptations needed for the older code state there.
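
A minimal sketch of that on-demand model (tag, branch and commit names are illustrative):

    # create the stable branch from the release tag only once a fix is needed
    git branch stable-1.2 v1.2.0
    # the fix lands on master first, then gets cherry-picked across
    git switch stable-1.2
    git cherry-pick -x <sha-of-fix-on-master>
    git tag -a v1.2.1 -m "Point release 1.2.1"
    git push origin stable-1.2 v1.2.1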

With stable branches created only as needed, the git history gets less messy and stays more linear, making it easier to follow, and it feels more like an "only pay for what you actually use" model.

reply
embedding-shape 11 hours ago
Usually what I've seen is one of two solutions, with the first usually slightly favored: A) hide any new feature behind feature flags, essentially separating "what's in the code" from "how the application works"; or B) have two branches, one for development (master) and one for production. The production branch is what QA and releasers work with, master is what developers work with, and cherry-picking and backporting become relatively trivial.
reply
dsego 5 hours ago
We've been using feature flags, but mostly for controlling when things get released. Feature flags carry their own issues: they complicate the code, introduce parallel code paths, and if they aren't maintained properly it gets difficult to introduce new features and have everything working together seamlessly. Usually you want to remove the flag soon after release, otherwise it festers. The production branch is also OK, but committing out of order can break references if commits are not in the same order as master, and patching something directly to prod can cause issues with promoting changes from master to prod; it requires some foresight not to break builds.
reply
Izkata 9 hours ago
With (B) you've just reconstructed the part of git-flow that was questioned at the start of this thread. Just switch the two branches from master/production to develop/master.
reply
antonvs 9 hours ago
B is basically Gitflow with different branch names - “one for development” is called develop, “one for production” is called main.
reply
mort96 10 hours ago
What's the difference between what you describe, and continuously merging things into main and cutting releases from a branch called stable?
reply
estimator7292 10 hours ago
They're the same strategy with different branch names.
reply
secretazianman8 3 hours ago
Are you using feature flags in your workflow pattern? These can be used to gate releases into your production environment while still allowing development work to be continuously integrated to trunk without blocking.

This also means that the release to prod happens post-integration by means of turning the feature flag on. Which is arguably a higher quality code review than pre-integration.

reply
globular-toast 12 hours ago
Yes, you have to include QA in the continuous integration process for it to work. That means at any time you can just tag the top of the master branch to cut a release, or do continuous delivery if it makes sense (so no tags at all).

It sounds like you are doing a monorepo type thing. Git works best with, and was designed for, multiple independent repos.

reply
WorldMaker 7 hours ago
Even in a monorepo you can tag releases independently in git. git doesn't prescribe any particular version tag naming scheme and stores tags, like other refs, in a folder structure that many (but not all) UIs pay attention to. You can tag `project-a/v1.2.0` and `project-b/v1.2.0` as different commits at different points in the repo, with each project independently versioned.

It makes using `git describe` a little bit more complicated, but not that much more. You just need to pass `--match 'project-a/*'` or `--match 'project-b/*'` when you want `git describe` for a specific project.
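
For example (project prefixes, versions and the placeholder commits are illustrative):

    # independently versioned tags for two projects in the same repo
    git tag -a project-a/v1.2.0 -m "project-a 1.2.0" <commit-for-a>
    git tag -a project-b/v0.9.3 -m "project-b 0.9.3" <commit-for-b>
    # describe the current commit relative to one project's tags only
    git describe --tags --match 'project-a/v*'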

reply
bandrami 12 hours ago
The model works well if you're developing version 3.2 (which is not ready yet) but also non-trivially maintaining 3.1.
reply
tremon 10 hours ago
Exactly. As soon as you're working with multiple active releases, the branching model becomes a distinction without a difference. You will always be working with multiple (tagged) release branches, a default branch on which developers base their new work, and an integration branch where development work is gathered and tested to cut the next release. Whether the default and integration branch are identical or separate is mostly immaterial to the developer workflow.

The only meaningfully different model is when you have a continuously-releasable trunk and never do fixes on older releases (quite common for internal tools).

reply
jeffwask 8 hours ago
This. The only times I have used this were to patch over other bad decisions, like maintaining 3-4 active releases of a SaaS product simultaneously, or other choices that forced us into a complex branching scheme. If you fix the downstream and upstream issues, you can usually simplify down to an easier branching model, but if you are managing hotfixes and releases across many versions, this works and keeps it sane-ish.
reply
keithnz 5 hours ago
The branching strategy we use: features are branched off master; as features are finished we pick which ones we want to bundle into a release, create a release branch from master, merge those features into it, and go through QA; when that release is ready, we merge it to master. Meanwhile new features are still being worked on based off master. This works really well, as it gives you a lot of control over when things get released and lets you manage testing impact / user impact. It also makes it really easy to back out of a feature without it polluting a "develop" branch that other features have branched off. All features are based on code that is actually deployed.
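
A rough sketch of that flow, with made-up branch names:

    # features branch off master
    git switch -c feature/foo master
    # bundle finished features into a release branch cut from master
    git switch -c release/2025.06 master
    git merge feature/foo feature/bar
    # ...QA happens on the release branch; back out a feature here if needed...
    # when the release is ready, merge it to master
    git switch master
    git merge release/2025.06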
reply
taeric 8 hours ago
The difference here will almost certainly come down to how you release your work. For product-based teams that have a very specific place to plant the tag of what was released, development branches reflect their ability to know exactly what has been shipped to customers.

And this is more than just knowing the exact commit. Which, to be fair, is all that you truly need.

Having it on a branch, though, reflects that hot fixes and similar can still be applied, and though the tag will remain at what was released, the branch will be what it currently looks like.

reply
Aerolfos 14 hours ago
It's useful if your integration work takes some time - easy to run into with open source.

Imagine you have multiple contributors with multiple new features, and you want to do a big release with all of them. You sit down a weekend and merge in your own feature branch, and then tell everyone else to do so too - but it's a hobby project, the other guys aren't consistently available, maybe they need two weekends to integrate and test when they're merging their work with everyone else's, and they don't have time during the weekdays.

So, the dev branch sits there for 2-3 weeks gradually acquiring features (and people testing integration too, hopefully, with any fixes that emerge from that). But then you discover a bug in the currently live version, either from people using it or even from the integration work, and you want that fix live during the week (specific example: there's a rare but consistent CTD in a game mod, you do not want to leave that in for several weeks). Well, if you have a branch reflecting the live status you can put your hotfix there, do a release, and merge the hotfix into dev right away.

Speaking of game mods, that also gives you a situation where you have a hard dependency on another project - if they do a release in between your mod's releases, you might need to drop a compat hotfix ASAP, and you want a reflection of the live code where you can do that, knowing you will always have a branch that works with the latest version of the game. If your main branch has multiple people's work on it, in progress, that differs from what's actually released, you're going to get a mess.

And sure you could do just feature branches and merge feature branches one by one into each other, and then into main so you never have code-under-integration in a centralized place but... why not just designate a branch to be the place to do integration work?

You could also merge features one by one into the main branch but again, imagine the mod case: if the main code needs X update for compatibility with a game update, why do that update for every feature branch, and expect every contributor to do that work? Much better to merge a feature in when the feature is done, and if you're waiting on other features, centralize the work to keep in step with main (and the dependency) in one place. Especially relevant if your feature contributors are volunteers who probably wouldn't have the time to keep up with changes if it takes a few weeks before they can merge in their code.

reply
pragma_x 7 hours ago
I can't say that I've used gitflow in anger. That said, I always saw the full complexity of the approach as addressing the tracking of multiple concurrent releases of a product. That's extremely uncommon in our increasingly SaaS world, but I imagine having so many branches with commits moving laterally between them is invaluable for backporting security fixes and the like.

For the rest of us, trunk-based development with feature/fix branches is more than enough.

reply
4ndrewl 3 hours ago
nvie said as much in 2020 and stuck a big disclaimer at the top of the post https://nvie.com/posts/a-successful-git-branching-model/
reply
layer8 14 hours ago
It can be beneficial if there is no mechanism that ensures that develop is always in a working state, but there is one that ensures that master is. The immediate benefit is that a new feature branch can always be started off master from a known-good state.

Of course, there are ways to enforce a known-good state on master without a dedicated develop branch, but it can be easier when having the two branches.

(I just dislike the name “develop”, because branch names should be nouns.)

reply
simianwords 13 hours ago
Prod deployments usually have a tag associated
reply
layer8 11 hours ago
Prod deployment isn’t the same as known-good. The latter can be “passes all automated quality controls”; that doesn’t automatically mean that it’ll be deployed. Release/deploy cadences can be much slower than merge-into-master, and usually depend on actual feature (set) completion.
reply
dec0dedab0de 7 hours ago
having a main branch allows a casual observer (management) to browse your project in the GUI and see what is currently live.

I like the opportunity to force a second set of testing, and code review. Especially if the team is big enough that you can have different people doing code review for each branch.

You can also have your CI/CD do longer more thorough testing while merging to main vs development.

If it's a project with a single deployment, version tagging is kind of pointless; it's much easier to just use a branch to reflect what is live, and roll back to a merge commit if you have to. Then you can still merge directly to main in the event of a hotfix.
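
For example, one way to roll the live branch back to an earlier merge commit, assuming the team allows rewriting that branch (otherwise git revert -m 1 on the offending merge is the safer option):

    git switch main
    git reset --hard <known-good-merge-commit>
    git push --force-with-lease origin main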

reply
nazcan 6 hours ago
> having a main branch allows a casual observer(management) to browse your project in the gui and see what is currently live.

I never found this very compelling. What is main in that world is not the source of truth, and it's rare to have a system atomically in one state or the other - normally there are progressive rollouts. And if you ever need to roll back in production, I assume no one is changing where main is.

> I like the opportunity to force a second set of testing, and code review. Especially if the team is big enough that you can have different people doing code review for each branch.

To be explicit about code review, do you mean there is (1) main, (1) development, and then a bunch of feature branches - and that there is review when merging into development and main? Having a two-tiered review process seems extremely difficult to do - versus just having more reviewers on the first merge - especially dealing with merge conflicts and needing to merge again into development.

> You can also have your CI/CD do longer more thorough testing while merging to main vs development.

I think it's fair to do more testing later. I think the equivalent I'm used to (which is pretty close, so not a huge difference), is only building releases from the commit that passed the bigger/slower tests.

But also, assuming there are multiple deployments coming from one repo, if you block merging into main, that means you'd be blocking on all tests passing - while release branches for a given product can select a subset of tests when deciding on release candidates.

> If it's a project with a single deployment, version tagging is kind of pointless, it's much easier to just use a branch to reflect what is live, and roll back to a merge commit if you have to. Then you can still merge directly to main in the event of a hotfix.

I think it's worth maintaining the flexibility of how many releases come from a repo. Needing to fork repos just because you want another deployable release in the future seems painful to me.

reply
dec0dedab0de 5 hours ago
> I never found this very compelling. What is main in that world is not the source of truth, and it's rare to have a system atomically in one state or the other - normally there are progressive rollouts. And if you ever need to roll back in production, I assume no one is changing where main is.

In the scenarios I am thinking of, the only way to rollback production is to update the main branch and redeploy.

But still, it's just the niceness of having the default branch match production or the current release. Even if you're not going through the extra code review or testing, and all you did was automatically point main to the same commit as the latest release tag, it's still nice. Of course, you could have a production branch or whatever, set that as your default, and leave main for development, but the point is the same.

> To be explicit about code review, do you mean there is (1) main, (1) development, and then a bunch of feature branches - and that there is review when merging into development and main? Having a two-tiered review process seems extremely difficult to do - versus just having more reviewers on the first merge - especially dealing with merge conflicts and needing to merge again into development.

Yes, but merge conflicts are not an issue at all if you don't squash commits on merge, at least not between development and main. The way we used to do it was that each part of the project had owners, with one required to review all changes before merging to development; then any other senior developer could review the merge to main. Though we would encourage the whole team to review every PR if they had time.

In practice, this was really just a chance to see all the changes going in on this next release.

> I think it's worth maintaining the flexibility of how many releases come from a repo. Needing to fork repos just because you want another deployable release in the future seems painful to me.

When the development team is also the operations team it's easier to keep them together and just update the deployment to go to multiple places, which would effectively still be a single deployment.

If they're separate teams, then I would be inclined to give operations its own repo where they can manage their specific things, with a pipeline that pulls down the artifacts from the development team.

reply
bombcar 8 hours ago
"git-flow" makes a lot more sense when you realize the "develop" branch doesn't have to be a branch "on the git server" and instead is your master branch you're fuddling with.
reply
globular-toast 12 hours ago
Yeah, I actually think that diagram and "git-flow" has caused a lot of harm. It shows a complete misunderstanding of both continuous integration and what tags are for. I've successfully purged git-flow and dragged developers, kicking and screaming, to a simple master branch, tags and maintenance branch model a few times now.
reply
abustamam 10 hours ago
When I first got started programming, the "git flow" method was the one that popped up and was referred to most when I googled how git works. And so I thought that git flow was the canonical way to use git.

I tried adhering to it at my first job but I guess I didn't understand git flow well enough because people just thought I was making random branches for fun.

reply
UqWBcuFx6NV4r 15 hours ago
[flagged]
reply
m12k 14 hours ago
I'll try to respond to your comment in good faith, even though I find it to have a rather aggressive, ad hominem tone:

> If this pattern is so pervasive, and so many people care enough to attempt to explain it to you, yet you remain unconvinced, I’m not sure how you reach the conclusion that you are right, and correct, and that it’s such a shame that the world does not conform to how you believe that things should be.

The reason nobody has convinced me otherwise isn't that I haven't listened, but because the people I talked to so far didn't actually have arguments to put forth. They seemed to be cargo-culting the model without thinking about why the branching strategy was what it was, and how that affected how they would work, or the effort that would be put into following each part of the model vs the value that this provides. It seemed to me that the main value of the model to them was that it freed them from having to think about these things. Which honestly, I have no problem with, we all need to choose where to put our focus. But also, all the more reason why I think it's worth caring about the quality of the patterns that these guys follow unquestioningly.

> Besides a bit of a puritan argument about “git gods”, you haven’t really justified why this matters at all, let alone why you care so much about it.

Apart from that (apparently failed) attempt at humor, I did in fact attempt to justify later in my comment why it matters: "instead of demoting the master/main branch to that role, when it already has a widely used, different meaning?" To expand on that, using the same names to describe the same things as others do has value - it lowers friction, allows newcomers (e.g. people used to the github branching model) to leverage their existing mental model and vernacular, and doesn't waste energy on re-mapping concepts. So when the use case for the master/main branch is already well-established, coming up with a different name for the branch you do those things on ("develop") and doing something completely different on the branch called master/main (tagging release commits), is just confusing things for no added benefit. On top of that, apart from how these two branches are named/used, I also argue that having a branch for the latter use case is mostly wasted effort. I'm not sure I understand why it needs to be spelled out that avoiding wasted effort (extra work, more complexity, more nodes in the diagram, more mental load, more things that can go wrong) in routine processes is something worth caring about.

> On the other hand, the model that you are so strongly against has a very easy to understand mental model that is analogous to real-world things. What do you think that the flow in git flow is referring to?

"very easy to understand mental model"s are good! I'm suggesting a simplification (getting rid of one branch, that doesn't serve much purpose), or at least using naming that corresponds with how these branches are named elsewhere, to make it even easier to understand.

You say it's a model that I'm "so strongly against". Have you actually read my entire comment? It says "Especially when the rest of it is so sensible - the whole feature-branch, release-branch, hotfix flow is IMO exactly right for versioned software". I'm not strongly against the model as a whole. I think 80% of it is spot on, and 20% of it is confusing/superfluous. I'm lamenting that they didn't get the last 20% right. I care exactly because it's mostly a good model, and that's why the flaws are a pity, since they keep it from being great.

As for "flow", I believe it refers to how code changes are made and propagated, (i.e. new feature work is first committed on feature branches, then merged onto develop, then branched off and stabilized on a release branch, then merged back to develop AND over onto master and tagged when a release happens). Why do you bring this up? My proposal is to simplify this flow to keep only the valuable parts (new feature work is first committed on feature branches, then merged onto master, then branched off and stabilized on a release branch, then tagged and merged back to master when a release happens). Functionally pretty much the same, there's just one less branch to manage, and develop is called master to match its naming elsewhere.

> I’m sorry that you find git flow so disgusting but I think your self-righteousness is completely unjustified.

Again, I don't know where you get this from. I don't find the model disgusting, I find it useful, but flawed. I don't know why you think suggesting these improvements justifies making remarks about my character.

reply
boonzeet 13 hours ago
Not the original commenter but this felt worth adding to: you mention 'cargo culting', yet there are already two comments raising the core benefit, which is keeping main 'stable and working' while develop stays 'rough and ready'.

A less rigid development branch allows feature branches to be smaller and easier to merge, and keeps developers working against more recent code.

A more locked-down, PR-only main branch enables proper testing before merging, and ensures that the feature and release branches stemming from it start in a cleaner state.

I've worked with both approaches and I'm firmly in the camp of keeping main stable, with a looser shared branch for the team to iterate on.

reply
m12k 13 hours ago
Right, I get what you're saying, but in git-flow, the master branch isn't just "stable", it's "literally the last release we made". Which you can also get from the tags (i.e. checking out master or checking out the highest numbered release version tag will give you exactly the same commit). So I'm not sure I see the functional difference. Either you have "develop is messy, master is stable", or you have "master is messy, latest release tag is stable". I mean, sure, there's a bit of mental work involved in "which of these tags has the highest number". But surely that's less than the work involved in maintaining two long-running branches instead of one? I'm not really arguing for one way of working (or level of stability at integration) or another, I'm arguing that the one that git-flow supports can be implemented in a functionally equivalent, but simpler way, with naming that is more consistent with usage elsewhere.
reply
Animats 16 hours ago
This is so out of hand.

There's this. There's that video from Los Alamos discussed yesterday on HN, the one with a fake shot of some AI generated machinery. The image was purchased from Alamy Stock Photo. I recently saw a fake documentary about the famous GG-1 locomotive; the video had AI-generated images that looked wrong, despite GG-1 pictures being widely available. YouTube is creating fake images as thumbnails for videos now, and for industrial subjects they're not even close to the right thing. There's a glut of how-to videos with AI-generated voice giving totally wrong advice.

Then newer LLM training sets will pick up this stuff.

"The memes will continue" - White House press secretary after posting an altered shot of someone crying.

reply
pjc50 15 hours ago
The war on facts continues. Facts are hard, they require a careful chain of provenance. It's much cheaper to just make up whatever people want to hear, safe in the knowledge that there will never be any negative consequences for you. Only other people, who aren't real anyway.
reply
co_king_5 12 hours ago
the other people are real until they say something i don't like
reply
Andrex 9 hours ago
The other people aren't real unless I'm forced to acknowledge their existence, often through violent but avoidable means.
reply
oytis 10 hours ago
YouTube recently recommended me a video of Feynman allegedly explaining why we couldn't go to Mars and back. I am normally on YouTube for something specific and don't follow recommendations, but hey, it's Feynman and I hadn't seen it before, so I had to watch. After a few seconds it became very clear that the video was totally fake. Then I started digging, and it turned out that both the voice and the text it speaks are fake too. According to the "authors" it was "based on Feynman's work", meaning his whole physics course.
reply
nancyminusone 8 hours ago
YouTube must be absolutely flooded with this stuff.

I clicked on one about Henry the 8th, which is a story I've heard 100 times, but whatever. It started out normal enough, then near the end it claimed he started carrying around a staff with a human skull on top. Made-up artifacts and paintings.

The most egregious has to be the "World War II mechanic fixes entire allied plane arsenal with piece of wire" category. I've come across a couple dozen of these. Completely fabricated events and people that never seem to have existed.

reply
PaulDavisThe1st 7 hours ago
> YouTube must be absolutely flooded with this stuff.

I don't know what the current upload rate to YT is, but this seems unlikely. Despite the reckless and insane energy consumption associated with generative visual and audio art forms, there's no way there's enough power available for generative stuff to overwhelm the "actually recorded digital video" uploads.

Are there some niches on YT where this is true? Seems possible. YT overall? Nah.

reply
nancyminusone 6 hours ago
If my toilet overflows and starts leaking raw sewage into my bathroom, I don't tend to then go "well, at least the rest of my house is fine proportionally".

Most of these kinds of videos aren't fully Sora-level AI anyway; they just use ChatGPT to make up a fake story and script they would otherwise have to write themselves, which is much faster and increases the chances one of them gets picked up by the algorithm and generates a few bucks in ad revenue.

reply
PaulDavisThe1st 6 hours ago
> If my toilet overflows and starts leaking raw sewage into my bathroom, I don't tend to then go "well, at least the rest of my house is fine proportionally".

Sure. But if you live in a multi-apartment complex and someone's toilet on the far side of the complex is overflowing, you don't say "my apartment is flowing with raw sewage".

Maybe your part of the complex (YT) is drowning in raw sewage, mine is not, and I'm vaguely confident that the complex (YT) is large enough that at this point in time, most parts of it are still functioning "as intended".

reply
anon7000 5 hours ago
I think YT Shorts IS overwhelmed with complete garbage. There are lots of great channels I watch that don't have this issue.

But YT shorts is the one place on YT that tries to frequently show you new uploads and stuff outside of your normal algorithm, and there is so much AI on there.

reply
dh2022 8 hours ago
One of these videos was referencing a problem with one of the Mars landers. Feynman died in 1988, long before the landers were even on the drawing board.
reply
nxobject 16 hours ago
> recently saw a fake documentary about the famous GG-1 locomotive

It wouldn’t happen to be a certain podcast about engineering disasters, now, would it?

reply
appointment 12 hours ago
Not a patron, so I haven't seen the whole video, but I don't think Rocz would use AI for a video about his beloved Pennsylvania Railroad.
reply
tovej 14 hours ago
Well there's your problem? That one always seemed very well researched to me.
reply
rmunn 17 hours ago
Similar story. I'm American but work and live outside the US, so I don't know how likely this would be if I had ordered from Amazon. But I ordered a rug for my sons' room from this country's equivalent to Amazon (that is, the most popular order-online-and-we-ship-to-you storefront in this country), and instead of what I ordered (a rug with an image showing the planets, with labels in English) I got an obviously AI-generated copy of the image, whose letters were often mangled (MARS looked like MɅPS, for example). Thankfully the storefront allowed me to return it for a refund, I ordered from a different seller on the second try, and this time I received a rug that precisely matched the image on the storefront. But yes, there are unscrupulous merchants who are using AI to sloppily copy other people's work.
reply
fnands 15 hours ago
Another similar story: My aunt passed away last year, and an acquaintance of my cousin sent her one of those "hug in a box" care packages you can buy off Amazon.

Except when it was delivered, this one said "hug in a boy" and "with heaetfelt equqikathy" (whatever the hell that means). When we looked up the listing on Amazon it was clear it was actually wrong in the pictures, just well hidden with well placed objects in front of the mistakes. It seems like they ripped off another popular listing that had a similar font/contents/etc.

Luckily my cousin found it hilarious.

reply
idop 6 hours ago
Reminds me of when, one Valentine's Day or whatever, a new booth popped up at the mall where my gym was. They sold these nice heart-shaped chocolate boxes. I bought one for my sister. When she opened it, she found one piece of chocolate, and the rest of the box was filled with blocks of Styrofoam... The next day the booth was gone.
reply
csours 5 hours ago
Damn, that sounds like a bit that would be cut from a romantic comedy for being too on the nose.
reply
coldpie 10 hours ago
I've gone back to shopping pretty much exclusively offline. Sifting through the garbage was too much work even before the AI slop flood. I'd rather pay a little extra to a local retailer so I know what I'm actually buying, because it's right there on the shelf in front of me.
reply
nippoo 17 hours ago
They've taken it down now and replaced with an arguably even less helpful diagram, but the original is archived: https://archive.is/twft6
reply
yoz-y 16 hours ago
Wow it’s even worse than I thought. I thought that convictungly morhing would be the only problem. The nonsense and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…

That this was ever published shows a supreme lack of care.

reply
quietbritishjim 16 hours ago
The turn axis is great! Not only have they invented their own letter (it's not r, or n, or m, but one more than m!), it points the wrong way.
reply
shit_game 16 hours ago
Lots of the AI-isms with letters remind me of tom7's SIGBOVIK video Uppestcase and Lowestcase Letters [advances in derp learning]:

https://www.youtube.com/watch?v=HLRdruqQfRk

reply
leni536 11 hours ago
It's like the Pokémon evolution of n through m, we need to notify the Unicode Consortium.
reply
duxup 7 hours ago
It really is wild / telling how fundamentally AI can screw up what seems like just basics like ... an arrow.
reply
shaky-carrousel 15 hours ago
And that's what they dared to show to the public. I shudder thinking about the state of their code...
reply
heresie-dabord 11 hours ago
This passage from the post by the original creator of the diagramme summarises our Bruh New World:

"What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?

reply
zephen 16 hours ago
Is it truly possible to make GitFlow look worse than reality?
reply
fooyc 10 hours ago
Apparently the new diagram is now a rip off of another one from Atlassian: https://bsky.app/profile/vurobinut.bsky.social/post/3mf52hmw...
reply
franktankbank 10 hours ago
TIMMMAYYY
reply
rzmmm 16 hours ago
It looks like typical "memorization" in image generation models. The author likely just prompted the image.

The model makers attempt to add guardrails to prevent this, but it's not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.

reply
pjc50 15 hours ago
Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.
reply
coldpie 10 hours ago
> It seems a lot of large AI models basically just copy the training data and add slight modifications

Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.

reply
jimmaswell 7 hours ago
> It seems a lot of large AI models basically just copy the training data and add slight modifications

This happens even to human artists who aren't trying to plagiarize - for example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.

reply
jezzamon 17 hours ago
"continvoucly morged" is such a perfect phrase to describe what happened, it's poetic
reply
alex_suzuki 17 hours ago
It's the sound of speaking when someone is stuffing AI down your throat.
reply
bw86 15 hours ago
I am waiting for Raymond Chen to post a "Microspeak: Morged" blog post.
reply
ChrisArchitect 17 hours ago
Was reading the word morged thinking it was some new slang I hadn't heard of. Incredible.
reply
nvader 16 hours ago
If it wasn't before, it will be now.
reply
thebruce87m 16 hours ago
I propose:

Morge: when an AI agent is attempting to merge slop into your repo.

reply
Balinares 16 hours ago
Lifehack: you can prevent many morges by banning user claude on GitHub. Then GitHub will also tell you when a repo was morged up.

Do your part to keep GitHub from mutating into SourceMorge.

reply
hiccuphippo 7 hours ago
But it was created with a definition already: When an AI agent takes your work and regurgitates a worse version of it.
reply
arduanika 14 hours ago
Or something more general, like when a concept or diagram gets pulled into the AI's rough knowledge base, but it completely misses the point and mangles it.

Or, alex_suzuki's colorful definition.

But really, whoever goes to Urban Dictionary first gets to decide what the word means. None of the prior definitions of "morg" has anything to do with tech.

reply
adityaathalye 17 hours ago
Same! I was about to go duck-searching for meaning, but thanks to jezzamon for pointing it out.

brb, printing a t-shirt that says "continvoucly morged"

reply
kuerbel 16 hours ago
You could add one of those Microslop memes that are going around.
reply
jacquesm 14 hours ago
Missed opportunity: 'morgued'.
reply
arduanika 14 hours ago
Decades upon decades of hard work by public contributors -- open source code, careful tech blogging, painstaking diagrams -- all of it will be assimilated without credit or accuracy into the morg.

Resistance is futile.

reply
jimmaswell 7 hours ago
Good - we've been building the seed corpus for AI the past 50 years, and all this manual work now becomes exponentially more useful to others who get to build amazing things without all the tedium. I'm personally thrilled if my code made it in to the machine to help others. We laid train tracks by hand so that they could invent a machine to do it and we can focus on the destination.

I've been coding for over a decade, and I've built some great things, but the slow, careful, painstaking drudge-work parts were always the biggest motivation-killers. AI is worth it at any cost for removing the friction from these parts the way it has for me. Days of work are compressed into 20 minutes sometimes (e.g. convert a huge file of Mercurial hooks into Git hooks, knowing only a little about Mercurial hooks and none about Git hooks re: technical implementation). Donkey-work that would serve no value wasting my human time and energy on when a machine can do it, because it learned from decades of examples from the before-times when people did this by hand. If some people abuse the tools to make a morg here and there, so be it; it's infinitely worth the tradeoff.

reply
filleduchaos 4 hours ago
IMO this would be a much more sensible reply to a different post, not one about a chart that unironically contains the words "continvoucly morged"
reply
arduanika 3 hours ago
Yeah, I felt kind of bad that he gave me such an earnest, thought-out reply to what was essentially a stupid morg/borg joke. But his final sentence suggests that he at least got my joke.

(I don't entirely agree with him, but I upvoted for at least trying to get us back on topic!)

reply
jimmaswell 12 minutes ago
Yeah, I knew it was more of a joke comment, I just wanted to put down some thoughts I'd been forming for a while.
reply
FeistySkink 17 hours ago
Part of the VC/CM pipeline.
reply
ares623 17 hours ago
"Babe, wake up. New verb for slop just dropped."

It's a perfectly cromulent word.

reply
arduanika 14 hours ago
Quiet! MSFT's damage control team does not want us to embiggen the incident.
reply
Goofy_Coyote 13 hours ago
What are they going to do? continvoucly morge my tirm?
reply
hansmayer 16 hours ago
This is hilarious actually. I am starting to lean into the "AI-dangerous" camp, but not because the chatbot will ever become sentient. It's precisely because of the increasingly widespread adoption of unreliable tools by the incompetent but self-confident Office Worker (R).
reply
pjc50 15 hours ago
Automatic Soldier Sveijk.
reply
cjs_ac 14 hours ago
The weakest point in any computer system is the bag of meat operating the thing.
reply
nananana9 14 hours ago
Can we stop calling humans "bags of meat"?
reply
cjs_ac 13 hours ago
I've tried, but have been unable to do so. I think it's a limitation of my meatness.
reply
krapp 14 hours ago
The accepted term is "ugly bags of mostly water."
reply
worble 13 hours ago
Explanation: It's just that... you have all these squishy parts, master. And all that water! How the constant sloshing doesn't drive you mad, I have no idea.
reply
ndsipa_pomu 14 hours ago
Why not? They're made out of meat!

https://www.youtube.com/watch?v=7tScAyNaRdQ

reply
nicbou 13 hours ago
I don't think it's a matter of competence or confidence. It's more about indifference. AI supercharged the bullshit jobs it should have displaced.
reply
Henchman21 5 hours ago
When over half the population have bullshit jobs, putting them all out of work seems inadvisable. Doubly so given the political powder-keg we currently exist in.
reply
anonymous908213 17 hours ago
Microsoft employee (VP of something or other, for whatever Microsoft uses "VP" to mean) doing damage control on Bluesky: https://bsky.app/profile/scott.hanselman.com/post/3mez4yxty2...

> looks like a vendor, and we have a group now doing a post-mortem trying to figure out how it happened. It'll be removed ASAFP

> Understood. Not trying to sweep under rugs, but I also want to point out that everything is moving very fast right now and there’s 300,000 people that work here, so there’s probably be a bunch of dumb stuff happening. There’s also probably a bunch of dumb stuff happening at other companies

> Sometimes it’s a big systemic problem and sometimes it’s just one person who screwed up

This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should. In either case -- no review process, or a failed review process -- the failure is definitionally systemic. If a single person can on their own whim publish not only plagiarised material, but material that is so obviously defective at a single glance that it should never see the light of day, that is in itself a failure of the system.

reply
HelloNurse 15 hours ago
> "everything is moving very fast"

Then slow down.

With this objective lack of control, sooner or later your LLM experiments in production will drive into a wall instead of hitting a little pothole like this diagram.

reply
embedding-shape 15 hours ago
And at the same time, they have time to quickly brush it off with "looks like a vendor" even though people are still investigating. Yes, we can see it's moving really fast; "move fast and break things" has probably been infecting Microsoft. Users are leaving Microsoft behind because everything is breaking, and then clueless VPs blame it on moving too fast?
reply
leni536 12 hours ago
- Put on your seatbelts, man!

- I can't, moving too fast!

reply
tremon 10 hours ago
"Driving into a wall" is still a positive outcome. It's just as likely to drive into a crowd.
reply
HelloNurse 4 hours ago
Serious loss of life is a plausible LLM outcome, particularly for Microsoft, which does both operating systems (incidents can be much worse than the CrowdStrike bricking) and chatbot assistants that can offer lethal advice. Catastrophic property damage is hopefully more likely.
reply
wiseowise 14 hours ago
Jokes on you, I’ll cash out by then and move to the next gig.
reply
hansmayer 15 hours ago
> This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should.

Completely with you on this, plus I would add the following thoughts:

I don't think the size of the company should automatically be a proxy measure for a certain level of quality. Surely you can have slobs prevailing in a company of any size.

However, this kind of mistake should not be happening in a valuable company. Microsoft is currently still priced as a very valuable company, even with the significant corrections after Satya's crazy CapEx commitments from two weeks ago.

However, it seems that recently the mistakes, errors and "vendors without guidelines" have piled up a bit too much for a company supposedly worth 3-4T USD, culminating in this weird, random, but very educational case. If anything, it's an indicator that Microsoft may not really be as valuable as it is currently still perceived to be.

reply
p_ing 16 hours ago
You're incorrect on how the publishing process works. If a vendor wrote the document, it has a single repo owner (all those docs are on GitHub) who would need to sign off on a PR. There aren't multiple layers or really any friction to get content onto learn.msft.
reply
anonymous908213 15 hours ago
I suggested that if there is no review process, it is a systemic issue, and that if there is a review process that failed to catch something this egregious, it is a systemic issue. My supposition is that regardless of how the publishing process works, there is a systemic failure here, and I made no claims as to how it actually works, so I'm not sure where the "you're incorrect on how it works" is coming from.
reply
p_ing 15 hours ago
You said it takes multiple people screwing up, implying that publishing content had multiple gates/reviewers.

It doesn’t.

reply
AlienRobot 15 hours ago
But if there are no gates, doesn't that mean the people who should have put the gates in there screwed up?
reply
ryandrake 10 hours ago
There have been no Gates at Microsoft for a long time.
reply
p_ing 14 hours ago
There is no singular publishing org at MSFT. Each product publishes its own docs, generally following a style guide. But the doc process is up to the doc owner(s).
reply
GrinningFool 8 hours ago
That seems to further make the case that it's a systemic problem.

The organization would have more guardrails in place if it prioritized "don't break things" over "move fast".

reply
dxdm 14 hours ago
I think you're barking up the wrong tree here.
reply
p_ing 9 hours ago
What?

This is how it works. There are too many people here, like the OP, who make assumptions about what the process is or should be.

reply
dxdm 5 hours ago
My dog does this thing where she picks a stick and gets you to pull on it, and she will pull on her end, too. She gets very focused on it. Pulling on the stick is the most important thing to her in that moment, when in fact it's just a stick she chose to turn into this tug of war.

That's not entirely unlike what you're doing here. You latched onto a misunderstanding of OP's intent, and by making a thing out of it got people to pull back, and now you also keep tugging on your end.

Except she does it on purpose and enjoys it, while I think you did it inadvertently and you do not seem that happy. But then, you're not a dog, of course.

You could stop pulling on the stick. I do enjoy these doggy similes, though. :)

reply
arduanika 4 hours ago
This is a perfect description. I've probably been the dog at some point.

p_ing, see my nearby comment about what we mean by "multiple". Does that comment make any false "assumptions"? Or, is it you who are mistaken, persistently failing to understand what your interlocutors are saying?

reply
dxdm 4 hours ago
It can be hard to resist.
reply
anonymous908213 8 hours ago
There is no such thing as "making an assumption" on what a process "should be". I am asserting what it should be. A multi-trillion dollar company should absolutely have a robust review process in place. If one single person can submit plagiarised and defective material onto an official platform that implicates the company as a whole in copyright infringement, management has failed, ergo multiple people have failed, ergo the failure is systemic.

It is extremely well known that individual humans make mistakes. Therefore, any well-functioning system has guards in place to catch mistakes, such that it takes multiple people making mistakes for an individual mistake to cascade into system failure. A system that does not have these guards in place at all, and allows one individual's failure to immediately become a system failure, is a bad system, and management staff who implement bad systems are as responsible for their failure as the individual who made the mistake. Let us be grateful that you do not work in an engineering or aviation capacity, given the great lengths you are going to in defending the "correctness" of a bad system.

reply
arduanika 14 hours ago
[flagged]
reply
RobotToaster 15 hours ago
I've seen better review processes in hobby projects
reply
HelloNurse 15 hours ago
Neither deadlines nor cheap work for hire help any sort of review process, while a hobby project is normally done by someone who cares.
reply
scwoodal 15 hours ago
This is correct. It just takes one person to review it and you’re good to go.

There’s also a service that rates your grammar/clarity and you have to be above a certain score.

reply
bravetraveler 15 hours ago
I'll quote the relevant part of the parent post:

> that is in itself a failure of the system

... and add some Beer flavor: POSIWID (the purpose of a system is what it does)

reply
nxobject 16 hours ago
A postmortem for that but not Copilot in notepad.exe? Priorities…
reply
InvisibleUp 4 hours ago
I’d also love a post-mortem on their guide to pirating the entire Harry Potter series for AI use. (https://devblogs.microsoft.com/azure-sql/langchain-with-sqlv...)

I’ve lost trust in anything Microsoft publishes anymore.

reply
anonymous908213 2 hours ago
Amazing.
reply
tabs_or_spaces 17 hours ago
An entire post mortem for a morged diagram is wild
reply
yborg 17 hours ago
post morgem
reply
Etheryte 16 hours ago
reply
zuminator 13 hours ago
*post morgem ti൬.
reply
Andrex 8 hours ago
You could say the original author has enjoyed some post-morgem clarity.
reply
patapong 15 hours ago
Morgem? I barely know 'em!
reply
batisteo 16 hours ago
Right to morgue
reply
adityaathalye 17 hours ago
Oldest trick in the book... Shoot the vendor.
reply
prmoustache 16 hours ago
> In either case -- no review process, or a failed review process -- the failure is definitionally systemic.

Spelling and grammar errors should have been corrected, but do you really expect a review process to identify that a diagram is a copy of one some rando published on the internet years ago?

reply
pointlessone 16 hours ago
It’s not just a copy. It’s a caricature of a copy with plenty of nonsense in it: typos and weird “text”, broken arrows, etc. Even a cursory look gives a feeling that something’s fishy.
reply
tharos47 15 hours ago
Weird text was already deemed acceptable by Microsoft in their documentation: they machine-translated most screenshots instead of recreating them in different locales, leading to the same problems as this image.
reply
toong 16 hours ago
"Legal reviewed it and did not flag any issues!"
reply
kuhaku22 15 hours ago
This is the same Microsoft that promised to indemnify any of its customers sued over copyright lawsuits as a result of using its AIs. [0] So I'm sure legal reviewed it the same way, saying "Yep, our war chest is still ample".

[0]: https://www.reuters.com/technology/microsoft-defend-customer...

reply
logifail 16 hours ago
Shouldn't "where are we sourcing our content" be part of any publication review process?
reply
bravetraveler 12 hours ago
The Large Laundering Machine, sorry, I mean the Large Language Model! Provenance? Where's that?
reply
sznio 16 hours ago
No. I'd expect that "continvouclous morging" gets caught.
reply
clort 16 hours ago
plenty of people on the internet recognised it immediately, so sure, he may have been a rando when he created it, but not so much 15 years later..
reply
Freak_NL 16 hours ago
Just that tiny image on his blog was enough for me to go "oh yeah, I used his diagram to explain this type of git workflow to colleagues a decade ago". Someone should have spotted that right away.
reply
p_ing 14 hours ago
Did the one MSFT employee that “reviewed” it know of this image? If not, it doesn’t matter how many people “on the Internet” recognized this image.

I’ll never understand the implied projection.

(I don’t think this was reviewed closely if at all)

reply
mcv 10 hours ago
I would hope that the person who reviews their training on gitflow knows something about gitflow. And if you know something about gitflow, it's not that strange to expect you to recognise the most iconic gitflow diagram.

But even if you don't recognise the original, at least you should be able to tell that the generated copy is bullshit.

reply
p_ing 9 hours ago
Again, I don't think this was reviewed. It was an assignment to a vendor: 'write the document and I'll hit publish'. There's a good chance the MSFT document _owner_ has no experience in the relevant area.
reply
michaelt 15 hours ago
Here is the original: https://nvie.com/posts/a-successful-git-branching-model/

Here is the slop copy: https://web.archive.org/web/20251205141857/https://learn.mic...

The 'Time' axis points the wrong way, and is misspelled, using a non-existent letter - 'Tim' where the m has an extra hump.

It's pretty clear this wasn't reviewed at all.

reply
beart 11 hours ago
I don't think the characterization of this being a diagram from "some rando" is accurate or fair.

The original content is highly influential... which should be self-evident from the fact that it is being reproduced verbatim 15 years later, and was immediately recognized.

reply
NekkoDroid 11 hours ago
> but do you really expect a review process to identify that a diagram is a copy from another one some rando already published on the internet years ago?

We aren't talking about just some random image from some random blog. The article we are talking about covers a specific topic, and when you search that topic online, one of the first results is the article containing the original image (at least on Google; Bing seems to really struggle to surface the article, but under Images it is again the first result).

I would cut some slack if this were a really obscure topic almost no one talks about, but it's been discussed in the programmer space for ages.

reply
ahoka 15 hours ago
Yes. This is expected at any serious company as intellectual property violations can have serious consequences.
reply
GrinningFool 8 hours ago
I would personally expect review to evaluate for correctness. Such a review would have stopped this from being published. This diagram as published is literal nonsense.
reply
computerfriend 11 hours ago
Yes? It's a famous diagram, at least in the world of Git workflows, so I would expect a reviewer of Microsoft's Git workflow documentation to be familiar with it.

(But the main issue is that the diagram is slop, not that it's a copy.)

reply
xxr 16 hours ago
Yeah, isn't this why we're told everything "moves so much slower at a bigco" than at a startup?
reply
flurdy 13 hours ago
LOL, calling Scott Hanselman a 'VP of something' is funny. Been listening to his stuff for years, even when I despised MS. Always seems genuinely nice. Probably one of the main reasons I have a more positive image of Microsoft these days.
reply
16bytes 5 hours ago
I thought the same thing. Scott is basically CEO of devrel at Microsoft.

Maybe we're of a certain age and remember when Scott was super influential as a blogger / conference speaker, but even now he's not some random VP.

reply
qingcharles 6 hours ago
Scott is definitely one of the good guys.
reply
12_throw_away 51 minutes ago
I don't know this person, but immediately trying to foist blame for a really embarrassing screwup onto a "vendor" does not really sound like "good guy" behavior to me?
reply
theolivenbaum 16 hours ago
Seems like this is going to be the year of AI slop being released everywhere by Microsoft. Just wish they'd put as much effort into a post mortem for this one as they're doing for a diagram on a blog post https://github.com/microsoft/onnxruntime/issues/27263#issuec...
reply
mcv 10 hours ago
The VP blames a vendor of course, but didn't Microsoft recently announce they were going to vibe code everything? Because this image looks like it comes from the kind of company that thinks it can vibe code everything.
reply
Anon4Now 15 hours ago
> everything is moving very fast right now

Now that's an interesting comment for him to include. The cynic in me can think of lots of reasons from my YouTube feed as to why that might be so. What else is going on at Microsoft that could cause this sense of urgency?

reply
BearOso 9 hours ago
From the beginning, one of the advertising tricks they have used for AI is FOMO. I presume that is so they can sell you as much of it as they can before you realize its flaws.

Everybody's so worried about getting in on the ground floor of something that they don't even imagine it could be a massive flop.

reply
mcny 14 hours ago
My guess is there is some communication going out to every "manager", even the M1, that says this is your priority.

For example, I know of an unrelated mandate Microsoft has for its management. Anything the security team's analysis flags in code that you or your team owns must be fixed or somehow acceptably mitigated within the specified deadline. It doesn't matter if it is Newtonsoft.Json being "vulnerable" and the entire system is only built for use by MSFT employees. If you let this deadline slip, you have to explain yourself and might lose your bonus.

OK, so the remediation for the Newtonsoft.Json case is easy enough that it is worth doing, but the point is I have a conspiracy theory that internally MSFT has such a memo (yes, beyond what is publicly disclosed) going to all managers saying they must adopt Copilot, whatever Copilot means.

reply
reisse 15 hours ago
> This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should.

Only if this is considered a failure.

Native English speakers may not know, but for a very long time (since before automatic translation tools became adequate) pretty much all MSFT docs were machine translated to the user agent language by default. Initially they were as useless as they were hilarious - a true slop before the term was invented.

reply
thunfischtoast 17 hours ago
Microsoft seems to have thrown quality assurance overboard completely. Vibe generate everything, throw it at a wall, see what sticks. Tech bros are so afraid of regulation they even drop regulation inside their own companies. (just kidding)
reply
nhinck2 16 hours ago
It's not just throwing QA out, they are actively striving for lower quality because it saves money.

They're chasing that sweet cost reduction by making cheap steel without regard for what it'll be used for in the future.

reply
bonesss 15 hours ago
Just a thought: the timeline of the vibe techs rolling out and the timeline of increasing product rot, sloppiness, and user-hostile “has anyone ever actually used this shit!?!” coming out of MS overlap.

Vibing won’t help out at all, and years from now we’re gonna have project math on why 10x-LLM-ing mediocre devs on a busted project that’s behind schedule isn’t the play (like how adding more devs to a late project generally makes it more late). But it takes years for those failures to aggregate and spread up the stack.

I believe the vibing is highlighting the missteps from the wave right before which has been cloud-first, cloud-integrated, cloud-upselling that cannibalized MS’s core products, multiplied by the massive MS layoff waves. MS used to have a lot of devs that made a lot of culture who are simply gone. The weakened offerings, breakdown of vision, and platform enshittification have been obvious for a while. And then ChatGPT came.

Stock price reflects how attractive stocks are for stock purchasers on the stock market, not how good something is. MS has been doing great things for their stock price.

LLMs make getting into emacs and Linux and OSS and OCaml easier than ever. SteamOS is maturing. Windows Subsystem for Linux is a mature bridge. It’s a bold time for MS to be betting on brand loyalty and product love, even if their shit worked.

reply
7bit 16 hours ago
Any excuse that tries to play down its own fault by pointing out other companies also have faults, is dishonest.

And that's exactly what happened here.

reply
tombert 15 hours ago
Is there a single thing that Microsoft doesn’t half-ass? Even if you wanted to AI generate a graph, how hard is it to go into Paint or something and fix the text?

I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It’s annoying because exFAT is basically the only filesystem used on SD cards since it’s basically the only filesystem that’s compatible with everything.

It feels like everything Microsoft does is like that though; superficially fine until you get into the details of it and it’s actually broken, but you have to put up with it because it’s used everywhere.

reply
TacticalCoder 13 hours ago
> Is there a single thing that Microsoft doesn’t half-ass?

Nope.

TFA writes this: "The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy".

But I disagree: it's classic Microsoft.

> I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It’s annoying because exFAT is basically the only filesystem used on SD cards since it’s basically the only filesystem that’s compatible with everything.

I hear you. exFAT works on Mac, Linux and Windows. I use it too, when forced. Note that bad old vfat also still works everywhere.

reply
tombert 48 minutes ago
Yeah, I realize other FAT systems work elsewhere too but they're even worse. exFAT is the best portable filesystem.

I really wish the industry had decided on something journaled, as it would make everything better and lead to fewer corrupted files but Microsoft has decided we can't have nice things.

reply
cwal37 17 hours ago
LinkedIn is also a great example of this stuff at the moment. Every day I see posts where someone clearly took a slide or a diagram from somewhere, then had ChatGPT "make it better" and write text for them to post along with it. Words get mangled, charts no longer make sense, but these people clearly aren't reading anything they're posting.

It's not like LinkedIn was great before, but the business-influencer incentives there seem to have really juiced nonsense content that all feels gratingly similar. Probably doesn't help that I work in energy which in this moment has attracted a tremendous number of hangers-on looking for a hit from the data center money funnel.

reply
marginalia_nu 13 hours ago
Yeah I've been collecting some of the weirdest ones I've seen floating by. It's really the only thing that has me visiting linkedin.

https://www.marginalia.nu/junk/linked/games.jpeg

https://www.marginalia.nu/junk/linked/json.png

https://www.marginalia.nu/junk/linked/syntax.png

(and before anyone tells me to charge my phone, I have one of those construction worker phones with 2 weeks battery. 14% is like good for a couple of days)

reply
gzread 11 hours ago
The red apple streams one is good. It shows how developers chase shiny new stuff with no respect for fundamentals. They will say it's less code, and then show you more code.
reply
BalinKing 9 hours ago
The apples one is LLM nonsense: the left example doesn’t include any code for the loop, whereas the streams version actually is iterating over a collection.

Regardless, FP-style code isn’t “shiny new stuff”—it’s been around for decades in languages like Lisp or Haskell. Functional programming is just as theoretically “fundamental” as imperative programming. (Not to mention that, these days, not even C corresponds that closely to what’s actually going on in hardware.)

reply
ckcheng 9 hours ago
>> apples.stream()

>> .filter (a -} ajsRed());

>> .forEach(giveApple); [sic]

> The red apple streams one is good. It shows how developers chase shiny new stuff with no respect for fundamentals.

The problem isn’t streams, it’s slop.

reply
g947o 13 hours ago
Care to explain the last one? The presentation is weird and stupid, but I don't see any obvious (technical) issue other than the missing bracket on the left, unlike the first two
reply
marginalia_nu 12 hours ago
Iterative example doesn't iterate, mismatches parentheses and brackets. Because of this, the iterative example is shorter and simpler than the "short & simple" lambda example.

Lambda example is to the best of my parsing ability this:

  apples.stream()
    .filter(a -λ a.isRed());  // <-- note semicolon
    .forEach(giveApple);
Should be

  apples.stream()
    .filter(a -> a.isRed()) // or Apple::isRed
    .forEach(a -> giveApple(a)); // or this::giveApple
It's also somewhat implied that lambdas are faster, when they're generally about twice as slow as the same code written without lambdas.
reply
OskarS 10 hours ago
It's interesting to see how LLMs make mistakes sometimes: replacing `->` with `-λ` because arrow sort-of has the same meaning as lambdas in lambda calculus. It's like an LLM brain fart replacing something semantically similar but nonsensical in context.
reply
marginalia_nu 10 hours ago
Probably morphed > into λ because they're similar shapes, and lambda was in the prompt. Image models are often prone to that sort of hallucination.
reply
carlob 12 hours ago
The 'long' code for checking apples is shorter, but it's missing the external for loop. So I guess you could say it's not (ahem) an apples to apples comparison.
reply
raphman 12 hours ago
I'm not OP but:

- missing ")" on the left side

- extra "}" on the right side

- the apples example on the right side ("Short code") is significantly longer than the equivalent "Long code" example on the left side (which might also be because that code example omits the necessary for loop).

- The headings don't provide structure. "Checking Each Apple" and "Only Red Apples!" sound like opposites, but the code does more or less the same in both cases.

reply
donkey_brains 12 hours ago
No “for” loop in the example purportedly showing an iterative approach.

Not mentioning the pain of debugging the streaming solution is also a little disingenuous.

reply
girvo 13 hours ago
Those are so funny that I was forgetting to breathe as I was laughing so hard, man that's excellent haha. Thanks for sharing them, even if we are cooked as a society...
reply
epiccoleman 9 hours ago
Minecraft JAVA --------> C++

that one gave me an actual lol.

reply
ChristianJacobs 17 hours ago
LinkedIn is a masquerade ball dressed up as a business oriented forum. Nobody is showing their true selves, everyone is either grinding at their latest unicorn potential with their LLM BFF or posting a "thoughtful" story that is 100% totally real about a life changing event that somehow turns into a sales pitch at the end...
reply
wiseowise 14 hours ago
LinkedIn is a fucking asylum populated by the most unhinged “people” and bots. I don’t know a single serious technical person active on LinkedIn.
reply
riskable 10 hours ago
There's a whole community devoted to pointing out LinkedIn Lunatics!

https://sh.itjust.works/c/linkedinlunatics

reply
DaiPlusPlus 13 hours ago
I short the stock of companies whose leadership is wasting time posting to LinkedIn instead of… y’know… leading their org. The more they post the more I short. Similarly, the less-attached-to-reality the post is the more I short.

I wish I could say I’m making bank off this strategy - but pretty-much all the slopposters (and the most insufferable of the AI boosters) are all working for nonpublic firms, oh well.

reply
snowwrestler 10 hours ago
Maybe not a winning strategy because a lot of public companies have a comms team that manages the CEO’s LinkedIn. Thereby saving the valuable time of the CEO themselves.
reply
ozim 17 hours ago
There are people who write genuinely interesting stuff there as well.

I use block option there quite a lot. That cleans up my experience rather well.

reply
wiseowise 14 hours ago
Care to share some of them? Genuinely curious.
reply
hliyan 14 hours ago
I don't know if I'm helping make things better or adding to the problem, but here's the sort of thing I share with my audience: https://www.linkedin.com/pulse/day-life-hft-developer-two-de...
reply
ozim 14 hours ago
I guess people one would follow on other platforms, plus bunch others posting in my native language.

Daniel Stenberg, Jason Fried, David Heinemeier Hansson, Nick Chapsas, Laurie Kirk, Brian Krebs

reply
close04 10 hours ago
> LinkedIn is a masquerade ball dressed up as a business oriented forum. Nobody is showing their true selves

That's the main trait of almost all social media. A parade of falsity, putting on the show for everyone else, being what you wish you were and what everyone else dreams of being or envies.

LinkedIn is about boasting and boosting the professional life, other social media is for the personal life. More or less equally fake.

reply
benhurmarcel 14 hours ago
> these people clearly aren't reading anything they're posting

I'm surprised they are able to care so little. Somebody actually published this and didn't care enough to even skim through it.

reply
co_king_5 13 hours ago
Illiteracy is very, very common and exists at a range of different severities.

The people who got Cs in your English class are functionally illiterate.

reply
kshri24 14 hours ago
Yep! Quit LinkedIn when it went downhill. Has only gotten worse since then. Most social media is filled with AI slop. For someone who grew up in the 90s-2000s BBS/IRC era this sucks!
reply
varjag 15 hours ago
Of course they aren't. The text to go with those diagrams is also machine generated.
reply
layer8 15 hours ago
The comment you’re replying to already stated that.
reply
Andrex 8 hours ago
LinkedIn and GitHub, hmm. Wonder if there's a common thread...
reply
tveita 6 hours ago
And LinkedIn is Microsoft as well...

IMO Microsoft is right at the nexus of opportunity for solving some of the large _problems_ that AI introduces.

Employers and job seekers both need a way to verify that they are talking to real identified people that are willing to put in some effort beyond spamming AI or wasting your time on AI run filters. LinkedIn could help them.

Programmers need access to real human-verified code and projects they can trust, not low-effort slop that could be backdoored at any moment by people with unclear motives and provenance. Github could help.

etc. etc. for Office, Outlook ...

But instead they've decided to ride the slop waves, throw QA to the wind, and call every bird and stone "copilot".

reply
sshagent 14 hours ago
Totally. I'm really getting behind the gradual replacement of TL;DR with AI;DR. If you can't be bothered to read your own AI slop, then I'm not reading it either.
reply
alex_suzuki 5 hours ago
For context:

> At Microsoft, we're working to add articles to Microsoft Learn that contain AI-generated content. Over time, more articles will feature AI-generated text and code samples.

From: https://learn.microsoft.com/en-us/principles-for-ai-generate...

<vomit emoji here>

reply
Mixtape 5 hours ago
Great. As if Learn articles weren't already a mess to begin with.

A few weeks ago, I needed some syntax information to help with building out a PowerShell script. The input and output parameter sections each included "{{ Fill in the Description }}"[1] in lieu of any meaningful content. There wasn't even a link to the data type's description elsewhere in the Learn database. I was ultimately able to get done what I needed to do, but it really irked me that whoever developed the article would publish it with such a glaring omission.

[1] https://learn.microsoft.com/en-us/powershell/module/microsof...

reply
andai 10 hours ago
https://en.wiktionary.org/wiki/morg

Morg doesn't seem to be a word in English (though it is in Irish!), but it sounds like it should be.

This is one aspect of AI I will miss, if we ever figure out how to make it go away. The delightful chaos. It invented a word here, without even meaning to.

For example, I vibe coded a QWOP clone the other day, and instead of working human legs, it gave me helicopter legs. You can't walk, but if you mash the keyboard, your legs function as a helicopter and you can fly through the sky.

That obviously wasn't intentional! But it was wonderful. I fear that in a few years, AI will be good enough to give me legs that don't fly like a helicopter. I think we will have lost something special at that point.

When I program manually, I am very good at programming bugs. If I'm trying to make something reliable, that's terrible. But if I'm trying to make a computer do something nobody even realized it can do... making it do things you weren't expecting is the only reliable way to do that.

So I've been working on a way to reintroduce bugs mechanically, by mutating the AST. The fundamental idea is sound -- most of my bugs come from "stuff I obviously meant to type, but didn't" -- but it needs a bit more work. Right now it just produces nonsense even I wouldn't come up with :)

I currently have "mess up the file". The next 2 phases would be "in a way so that it still compiles", and "in a way so that it doesn't (immediately) crash at runtime", (since the whole point is "it still runs, but it does something weird!"). More research needed :)
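A minimal sketch of the kind of mutation I mean (toy code with hypothetical names, not my actual tool), using Python's ast module to flip comparison operators so the result still parses but the logic is subtly wrong:

  import ast
  import random

  # Toy mutator: swap comparison operators so the mutated code still
  # compiles (phase 2), and usually still runs (phase 3), but behaves
  # subtly differently.
  class CompareFlipper(ast.NodeTransformer):
      SWAPS = {ast.Lt: ast.Gt, ast.Gt: ast.Lt, ast.LtE: ast.GtE, ast.GtE: ast.LtE}

      def visit_Compare(self, node):
          self.generic_visit(node)
          new_ops = []
          for op in node.ops:
              swap = self.SWAPS.get(type(op))
              # Flip roughly half of the comparisons we see.
              new_ops.append(swap() if swap and random.random() < 0.5 else op)
          node.ops = new_ops
          return node

  source = "def clamp(x):\n    return 0 if x < 0 else x\n"
  tree = CompareFlipper().visit(ast.parse(source))
  ast.fix_missing_locations(tree)
  print(ast.unparse(tree))  # may flip the condition to: 0 if x > 0 else x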

reply
hansmayer 10 hours ago
Nothing delightful or funny here, mate. Imagine being a junior just starting out, reading that shitty piece of content, trying to make sense of it all.
reply
LoganDark 9 hours ago
> Morg doesn't seem to be a word in English (though it is in Irish!), but it sounds like it should be.

Maybe because English also has 'morgue'.

reply
Forgeties79 10 hours ago
I appreciate humorous outcomes but not when I’m trying to solve a concrete task. I’m sure an LLM that is designed to introduce a little chaos is not hard to make. All I know is I won’t miss the weird and incorrect output if they ever get more consistent.
reply
andai 10 hours ago
The real question is can we get consistency without mode collapse? Are they orthogonal, or necessarily opposed?

Or to put it more bluntly.. can we get correctness without cringe ;)

I think it could be done, to a degree, with current systems, but it would be more expensive. You'd increase the temperature, and then you'd do more runs. And you could do that iteratively... re-generate each paragraph a few times, take the best of N. So you end up with interesting output, which still meets some threshold of quality.

Actually that doesn't sound too hard to slap together right now...
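A rough sketch of that slap-together version (everything here is a stand-in: generate() for whatever high-temperature model call you'd use, score() for whatever quality check you trust):

  import random

  # Stand-ins for a real setup: generate() would call a model at high
  # temperature; score() would be a grader model or heuristic.
  def generate(prompt: str, temperature: float) -> str:
      fillers = ["a wild take", "a dry summary", "an odd metaphor", "a crisp answer"]
      return f"{prompt} -> {random.choice(fillers)} (T={temperature})"

  def score(text: str) -> float:
      # Toy scorer: prefer shorter output.
      return -len(text)

  def best_of_n(prompt: str, n: int = 4, temperature: float = 1.2) -> str:
      # High temperature keeps the candidates varied; scoring keeps quality.
      candidates = [generate(prompt, temperature) for _ in range(n)]
      return max(candidates, key=score)

  print(best_of_n("Explain git-flow in one line"))

Applied per paragraph, that's exactly the "re-generate a few times, take the best of N" loop.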

reply
munificent 9 hours ago
> What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

We should start calling this "copyright laundering".

reply
adzm 17 hours ago
> Till next 'tim'

It took me a few times to see the morged version actually says tiന്ന

reply
zahlman 16 hours ago
For the curious:

  $ python -c 'print(list(map(__import__("unicodedata").name, "ന്ന")))'
  ['MALAYALAM LETTER NA', 'MALAYALAM SIGN VIRAMA', 'MALAYALAM LETTER NA']
(The "pypyp" package, by Python core dev and mypy maintainer Shantanu Jain, makes this easier:)

  $ pyp 'map(unicodedata.name, "ന്ന")'
  MALAYALAM LETTER NA
  MALAYALAM SIGN VIRAMA
  MALAYALAM LETTER NA
reply
AshleysBrain 15 hours ago
Is this not a good example of how generative AI does copyright laundering? Suppose the image was AI generated and it made a bad copy of a source image that was in the training data, which seems likely with such a widely disseminated image. When using generative AI to produce anything else, how do you know it's not just doing a bad-quality copy-paste of someone else's work? Are you going to scour the internet for the source? Will the AI tell you? What if code generation is copy-pasting GPL-licensed code into your proprietary codebase? The likelihood of this, the lack of an easy way to know it's happening, and the risks it causes all seem to be overlooked amidst the AI hype. And generative AI is a lot less impressive if it often works as a bad-quality copy-paste tool rather than the galaxy-brain intelligence some like to portray it as.
reply
Gigachad 15 hours ago
There are countless examples. Often I think about the fact that the Google search AI is just rewording news articles from the search results; when you look at the source articles, they make exactly the same points as the AI answers.

So these services depend on journalists to continuously feed them articles, while stealing all of the viewers by automatically copying every article.

reply
nicbou 13 hours ago
Yes, and it's slowly killing those websites. Mine is among them and the loss in traffic is around 60%.
reply
Andrex 8 hours ago
Snippets were already getting Google in legal hot water (with Yelp in the US and news agencies in Australia in particular IIRC) long before LLMs and AI scraping. It's a debatable gray area of Fair Use growing out of early rulings on DMCA related cases, and also Google's win over the Author's Guild at SCOTUS.
reply
jll29 14 hours ago
Of course Google has a history of copying articles in whole (cf. Google Cache, eventually abandoned).
reply
AlienRobot 15 hours ago
I actually often have the opposite problem. The AI overview will assert something and give me dozens of links, and then I'm forced to check them one by one to try to figure out where the assertion came from, and, in some cases, none of the articles even say what the AI overview claimed they said.

I honestly don't get it. All I want is for it to quote verbatim and link to the source. This isn't hard, and there is no way the engineers at Google don't know how to write a thesis with citations. How did things end up this way?

reply
jll29 14 hours ago
ChatGPT was a research prototype thrown at end users as a "product".

It is not a carefully designed product; ask yourself "What is it FOR?".

But the identification of reliable sources isn't as easy as you may think, either. A chat-based interaction really only makes sense if you can rely on every answer; otherwise the user is misled and the conversation may go in the wrong direction. The previous search paradigm ("ten snippets + links") did not project the kind of confidence, unearned as it turns out, that the chat paradigm does.

reply
zephen 2 hours ago
I have to say, I suffer from both problems, just not simultaneously.

Depending on what I am searching for, and how important it is to me to verify the accuracy and provenance of the result, I might stop at the AI, or might find, as you have, that there is no there there.

But, no matter what, the AI is essentially reducing the ability of primary sources to monetize their work. In the case where the search stops at the AI, obviously no traffic (except for incessant LLM polling) goes to the primary source.

And in the case you describe, identical traffic (your search) is routed to multiple sources, so if one of them actually was the source of something you were interested in, they effectively wind up sharing revenue with other sources, because the value of every one of your clicks is reduced by how often you click.

reply
ezst 13 hours ago
> What if code generation is copy-pasting GPL-licensed code in to your proprietary codebase?

This is obviously a big, unanswered issue. It's pretty clear to me that we are collectively incentivised to pollute the well, and that it will go on long enough for everything to become "compromised". That's essentially abandoning open source and IP licensing at large, taking us into an uncharted era where intellectual works become the protected property of nobody.

I see chatbots having less of an impact on our societies than the above, and interestingly it has little to do with technology.

reply
zephen 59 minutes ago
> we are collectively incentivised to pollute the well

Honestly, there are two diametrically opposed incentives occurring right now. The one you describe may not even be paramount -- how hard is it to prove infringement, shepherd a case through court, and win a token amount. Is it worthwhile just to enrich a few lawyers, and get more AI-regurgitated slop to open up?

The second incentive is to not publish source code that might be vacuumed up by a completely amoral automaton. We may be seeing the second golden age of proprietary software.

reply
UqWBcuFx6NV4r 15 hours ago
If you actually care about having that sort of discussion I’d suggest a framing that doesn’t paint anyone that doesn’t agree with you as succumbing to AI hype and believing it has “galaxy brain intelligence”. Please ditch this false dichotomy. At this point, in 2026, it’s tiring.
reply
Brian_K_White 17 hours ago
Please let morged become a thing.
reply
rossant 15 hours ago
A mix between merged, morphed, and morgue. I love it. Should be nominated as word of 2026.
reply
Brian_K_White 9 hours ago
I just like that it would mean there would be an entry right in the dictionary that links to the whole story for everyone to be reminded of for all of time.
reply
jjgreen 13 hours ago
Morge it! Morge it and let them flee like the dogs they are!
reply
ares623 17 hours ago
Satya yelled "it's morgin' time" and then morged all over the place.
reply
zephen 16 hours ago
If you've got the tiന്ന, we've got the morge.
reply
xxr 16 hours ago
When I read the title, I thought "morg" was one of those goofy tech words that I had missed but whose meaning was still pretty clear in context (like a portmanteau of "Microsoft" and "borged," the latter of which I've never heard as a verb but still works). I guess it's a goofy tech word now.
reply
nubinetwork 13 hours ago
At least it wasn't mogged, or morbed...
reply
rambambram 14 hours ago
Microsoft is only doing something about this now because there's enough evidence for a lawsuit. I don't know about the US, but the author seems to be from The Netherlands. Correct me if I'm wrong (and I don't know the exact legal name for it now), but there's something like a right to not get 'distortion or mutilation of intellectual property'.

Microsoft just spits in this creator's face by mutilating his creation in a bad way.

reply
nicbou 14 hours ago
> Is there even a goal here beyond "generating content"?

This is the part that hurts. It's all so pointless, so perfunctory. A web of incentives run amok. Systems too slick to stop moving. Is this what living inside the paperclip maximizer feels like?

Words we didn't write, thoughts we didn't have, for engagement, for a media presence, for an audience you can peddle yourself to when your bullshit job gets automated. All of that technology, all those resources, and we use it to drown humanity in noise.

reply
hansmayer 10 hours ago
Well, Bill Gates declared it 20 years ago: "Content is king", so here we are. "Content" everywhere, and not just on the Internet - go to any tourist hotspot and there is a 5:1 ratio of idiot influencers all stretching their faces in front of the same landmarks. Your utility company now produces "content" for some reason. Every lazy Tom, Dick and Joe produces "content" instead of practicing their trade. Apps summarise your books so you can "read" them on your commute. To the ghouls who are driving the "content" economy, everything is "content". They don't really understand long form, or novels, or music, or anything creative. To them it's all just content, to be bought, sold and consumed by the pound.
reply
flir 12 hours ago
Accelerando is looking rather prescient right now.
reply
nicbou 11 hours ago
Fall or Dodge in Hell too
reply
floating-io 13 hours ago
No need to worry, that's what the B Ark is for.
reply
aftergibson 16 hours ago
Archive.org shows this went live last September: https://web.archive.org/web/20250108142456/https://learn.mic...

It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.

How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.

So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.

AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.

It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.

If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?

reply
raphman 12 hours ago
Huh, I thought that the MS tutorial was older. The blurry screenshot in it is from 2023.

And there is another website with the same content (including the sloppy diagram). I had assumed that they just plagiarized the MS tutorials. Maybe the vendor who did the MS tutorial just plagiarized (or re-published) this one?:

https://techhub.saworks.io/docs/intermediate-github-tutorial...

reply
LauraMedia 16 hours ago
You are assuming: A) that everyone who saw this would go as far as to post publicly about it (and not just chuckle / send it to their peers privately) and B) that any post about this would reach you/HN and not potentially be lost in the sea of new content.
reply
anonymous908213 16 hours ago
> readers don’t have time to read properly

> So we end up shipping documentation that nobody really reads

I'd note that the documentation may have been read and noticed as flawed, but some random person noticing that it's flawed is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it because I'm busy trying to figure out how to actually accomplish the goal I was reading the documentation for, rather than stopping what I'm doing to make a complaint about how bad the documentation is.

This says nothing to absolve everyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.

reply
LauraMedia 16 hours ago
I usually would post it in our dev slack chat and rant for a message or two how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it on here/BlueSky.
reply
mns 15 hours ago
If you work in a medium to large company, you know most of the documentation is there for compliance reasons or for showing others that you did something at one point. You can probably just put slop at the end of documents, while you still keep headlines relevant and no one will ever read it or notice it.
reply
crossroadsguy 16 hours ago
Something tangential..

> people started tagging me on Bluesky and Hacker News

Never knew tagging was a thing on Hacker News. Is it a special feature for crème de la crème users?

reply
OJFord 15 hours ago
Don't think so, expect they just mean replying to comments to mention it, or they posted another article and people commented about seeing this and isn't it from another article of yours etc.
reply
viraptor 15 hours ago
People still try to use @user all the time, even though it doesn't work.
reply
jacquesm 14 hours ago
I wonder how far the balance will have to tip before the general public realizes the danger. Humanity's combined culture, for better or worse, up to 2021 or so was captured in a very large but still ultimately finite stream of bits. And now we're diluting those bits at an ever greater speed. The tipping point where there are more generated than handcrafted bits is rapidly approaching and obviously it won't stop there. A few more years and the genuine article is going to be a rarity.
reply
chromehearts 17 hours ago
Billions must morge
reply
reddalo 15 hours ago
Developors, developors, developors, developors!
reply
ezst 17 hours ago
Waiting for the LLM evangelists to tell us that their box of weights of choice did that on purpose to create engagement as a sentient entity understanding the nature of tech marketing, or that OP should try again with quatuor 4.9-extended (that really ships AGI with the $5k monthly subscription addon) because it refactored their pet project last week into a compilable state, after only boiling 3 oceans.
reply
meibo 17 hours ago
Glorp 5.3 Fast Thinking actually steals this diagram correctly for me locally so I think everyone here is wrong
reply
Balinares 16 hours ago
I may have a new favorite HN comment.
reply
Longwelwind 15 hours ago
Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a draw.io SVG (or a Miro diagram through their MCP).

I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!

reply
shaky-carrousel 15 hours ago
You're holding the LLM wrong.
reply
nicbou 13 hours ago
It's about as easy to hold as an old foam mattress
reply
nolok 15 hours ago
It's microsoft's AI though, not even the totally crazed evangelists like that one.
reply
bob1029 15 hours ago
This is why we don't use diffusion style models for diagrams or anything containing detailed typography.

An LLM driving mermaid with text tokens will produce infinitely more accurate diagrams than something operating in raster space.

A lot of the hate being generated seems due to really poor application of the technology. Not evil intent or incapable technology. Bad engineering. Not understanding when to use png vs jpeg. That kind of thing.

reply
floating-io 13 hours ago
I think the hate is more about the fact that nobody gave enough of a damn to catch the error before it went public...
reply
QuiCasseRien 11 hours ago
This is the type of diagram that horrifies me.

It's a very, very hard and time-consuming task for a dev to maintain hotfixes for previous releases!

Yeah, it's easier for users: they don't have to care about breaking changes or a migration guide. They just blindly update to the nearest minor.

But as time goes on, the code ends up being a complete mess of git branches and backports. The dev eventually forgets some patches and the software contains a major security hole.

The dev ends up exhausted, frustrated by the project and roasted by its users.

=> What I do: don't maintain any previous release, but provide a strong migration guide and list all breaking changes!

Users just have to follow updates or use other software.

I'm happy with it: my project has no code debt and cleaner code.

reply
ryukoposting 8 hours ago
Yes, the original git-flow post aggregates several widely-held intuitions about git, and then shoves a bunch of fluff in the middle to make it look more appealing to middle management.

I have seen firsthand how the original git-flow post convinced management to move off SVN. In that regard, it's an extremely important work. I've also never seen git-flow implemented exactly as described.

...and frankly, there are better ways to use git anyway. The best git workflows I've seen, at small scale and large, have always been rebase-only.

reply
zahlman 16 hours ago
I'm glad I actually checked TFA before asking here if "morging" referred to some actual technical concept I hadn't previously heard of.
reply
ccozan 16 hours ago
If we are here, let's at least coin it for something relevant!
reply
bulbar 15 hours ago
Good example of the fact that LLMs, at their core, are lossy compression algorithms that are able to fill in the gaps very cleverly.
reply
b00ty4breakfast 4 hours ago
we're being served the fecal sculptures that have been reconstituted from our own sewage; culture as we know it is basically at the end of its life. Everything must be mechanized; human involvement, and thus human agency, at any step is seen as abhorrent in the industrial milieu.
reply
duxup 7 hours ago
There's a lot of "bad look" things that aren't a big deal.

But man this one indicates such a horrible look / lack of effort (like none) from Microsoft.

Not that Microsoft is short on bad looks, but this really seems like one of those painfully symbolic ones.

reply
asddubs 14 hours ago
reply
etyhhgfff 5 hours ago
> This isn't a case ... It's the opposite of that. It's taking something that ...

"Its not this its that" is the new em-dash.

reply
imglorp 8 hours ago
I find it interesting that current AI, as stellar as it is at language and even at reading text in images, falls over hard when generating text in images.
reply
1970-01-01 8 hours ago
Never forget V is for carrot. https://i.redd.it/gj6tf34vkzcg1.png
reply
AndroTux 17 hours ago
“It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.”

Seems to be perfectly on brand for Microsoft, I don’t see the issue.

reply
blibble 16 hours ago
LLM infested crap, directly pushed to customers without any pushback

so standard Microslop

reply
eurekin 14 hours ago
It's funny how big an impact individual developers can have with such seemingly simple publications. At the time the article with that diagram was released, I was changing jobs, and I distinctly remember that the diagram was extensively discussed and compared to company standards at both the old and the new place.
reply
alex_suzuki 17 hours ago
From TFA:

> the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will just be more and more content like this that isn't so well-known or soon will get mutated or disguised in more advanced ways that this plagiarism no longer will be recognizable as such.

Most content will be less known and the ensloppified version more obfuscated... the author is lucky to have such an obvious association. Curious to see if MSFT will react in any meaningful way to this.

Edit: typo

reply
Ylpertnodi 17 hours ago
> Most content will be less known and the enslopified version more obfuscated...

Please everyone: spell 'enslopified', with two 'p's - ensloppiified.

Signed, Minority Report Pedant

reply
tkocmathla 17 hours ago
And 3 'i's?
reply
KronisLV 15 hours ago
> take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.

reply
bayindirh 17 hours ago
Sorry but, isn't this textbook Microsoft? Aside from being more blatant, careless and on the nose, what's different from past Microsoft?

These people distilled the knowledge of AppGet's developer to create the same thing from scratch and "Thank(!)" him for being that naive.

Edit: Yes, after experiencing Microsoft for 20+ odd years, I don't trust them.

reply
neonihil 10 hours ago
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

I'd argue that this statement is perfectly true when the word "unworthy" is removed.

reply
My_Name 12 hours ago
Of course, if you use AI, it's very hard to know the sources used for training that went into the output you just got.
reply
zkmon 17 hours ago
That old beautiful git branching model got printed into the minds of many. Any other visual is not going to replace it. The flood of 'plastic' incarnations of everything is abominable. Escape to jungles!!
reply
noufalibrahim 16 hours ago
Indeed. I don't remember all the details of the flow but the aesthetics of the diagram are still stuck in my head.
reply
nashashmi 10 hours ago
I remember doing these kinds of knock-offs of diagrams all the time in elementary school and middle school. I wonder: if I had done this when I was a kid, would the author feel just as triggered?

At some point, AI transformations of our work are just good enough but not excellent enough. And that is where the creators’ value lies.

reply
jron 16 hours ago
Morged > Oneshotted
reply
kshri24 16 hours ago
I can already tell this is probably some AI Microslop fuck up without even clicking on the article.

EDIT: Worse than I thought! Who in their right mind uses AI to generate technical diagrams? SMDH!

reply
beeflet 16 hours ago
Developer BRUTALLY FRAME-MORGED by Microsoft AI
reply
kgeist 16 hours ago
>What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.

"Don't attribute to malice what can be adequately explained by stupidity". I bet someone just typed into ChatGPT/Copilot, "generate a Git flow diagram," and it searched the web, found your image, and decided to recreate it by using as a reference (there's probably something in the reasoning traces like, "I found a relevant image, but the user specifically asked me to generate one, so I'll create my own version now.") The person creating the documentation didn't bother to check...

Or maybe the image was already in the weights.

reply
kuhaku22 14 hours ago
In this case, we can chalk it up to malicious stupidity. Someone posting a reference aimed at learners, especially with Microsoft's reach and name recognition, has a responsibility to check the quality and accuracy of the materials. Using an AI tool doesn't absolve that responsibility one bit.
reply
usefulposter 17 hours ago
Hey, it's just like the Gas Town diagrams.

https://news.ycombinator.com/item?id=46746045

reply
dude250711 14 hours ago
I only now understand it's the other kind of 'gas'.
reply
4ggr0 8 hours ago
> It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

well, what should i say...

reply
ifh-hn 12 hours ago
I had to look up what "morged" meant, it's either morph and merge or a youtuber. I'm going with the first.

I can't find a link to the learn page so can only see what's on the article. Is this a real big deal? Genuine question, driveby downvote if you must.

Even if this was a product of AI surely it's just a case of fessing up and citing the source? Yeah it doesn't look good for MS but it's hardly the end of the world considering how much shit AI has ripped off... I might be missing something.

reply
jabron 11 hours ago
It's indicative of how little Microsoft cares in addition to the issue of plagiarism. Which shouldn't be a surprise to anyone who's had to read Microsoft documentation recently.
reply
emmanuel_1234 11 hours ago
See https://archive.is/twft6 for context. The diagram is a bit lower on the page.
reply
dotdi 17 hours ago
I guess this image generation feature should never have been continvoucly morged back into their slop machine
reply
isoprophlex 16 hours ago
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

lmao where has the author been?! this has been the quintessential Microsoft experience since windows 7, or maybe even XP...

reply
zombot 14 hours ago
Continvoucly! Well morged.
reply
ftchd 14 hours ago
is this the HN version of "mogged my frame"?
reply
bitwize 17 hours ago
I love it when the LLM said "it's morgin' time" and proceeded to morg all over the place.
reply
ares623 17 hours ago
One step closer to the Redditification of HN. And it is entirely because of the content out there nowadays.
reply
nxobject 16 hours ago
Ha, I think a user since 2007’s earned the right to do that once in a while.
reply
debugnik 17 hours ago
Maybe you're missing the reference to the Morbius movie joke, which sounds surprisingly fitting. It's not like older HNers never made funny references.

Edit: Apparently you didn't.

reply
theodric 16 hours ago
HN is a Serious Place. We're here to make money. Please leave your jokes at home.
reply
jacquesm 14 hours ago
Slight correction, to pretend to make money.
reply
bitwize 16 hours ago
The commenter you're responding to a) independently made the exact same reference; b) has a username like that of Jared Leto's other Disney tentpole flop role...
reply
debugnik 16 hours ago
Well spotted, I guess they're pushing for HN's redditification then.
reply
whirlwin 17 hours ago
The new Head of Quality in Microsoft has not started working there yet, so it's business as usual at MS... And now with AI slop on top

Ref: https://www.reddit.com/r/technology/comments/1r1tphx/microso...

reply
misiek08 14 hours ago
So they will get better at publicly dismantling such cases and doing much better damage control in PR only. "Q" in "Microsoft" stands for "quality".
reply
bschwindHN 15 hours ago
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently.

That pretty much describes Microsoft and all they do. Money can't buy taste.

He was right:

https://www.youtube.com/watch?v=3KdlJlHAAbQ

reply
shaky-carrousel 15 hours ago
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.

LOL, I disagree. It's very on brand for Microslop.

reply
WesolyKubeczek 16 hours ago
I propose to adopt the word „morge”, a verb meaning „use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work”.

A noun describing such a piece of slop could be „morgery”.

reply
nvader 16 hours ago
I read through all the proposals in this discussion and I like yours the best out of them.

Seconded!

reply
larodi 16 hours ago
Everything you publish from now on will be stolen and reused one way or another.
reply
zephen 17 hours ago
On the one hand, I feel for people who have their creations ripped off.

On the other hand, it makes sense for Microsoft to rip this off, as part of the continuing enshittification of, well, everything.

Having been subjected to GitFlow at a previous employer, after having already done git for years and version control for decades, I can say that GitFlow is... not good.

And, I'm not the only one who feels this way.

https://news.ycombinator.com/item?id=9744059

reply
ali-aljufairi 15 hours ago
[dead]
reply
poojagill 17 hours ago
[dead]
reply
dude250711 14 hours ago
Allow me to help you: your CEO mandated the use of "AI".
reply
VerifiedReports 17 hours ago
[flagged]
reply
marssaxman 17 hours ago
It seems to me rather less likely that someone at Microsoft knowingly and deliberately took his specific diagram and "ran it through an AI image generator" than that someone asked an AI image generator to produce a diagram with a similar concept, and it responded with a chunk of mostly-memorized data, which the operator believed to be a novel creation. How many such diagrams were there likely to have been, in the training set? Is overfitting really so unlikely?

The author of the Microsoft article most likely failed to credit or link back to his original diagram because they had no idea it existed.

reply
jacquesm 14 hours ago
How you commit plagiarism is less important than the fact that you commit plagiarism.
reply
marssaxman 8 hours ago
What difference does that make in solving the actual problem? The real story here is not "some lousy Microsoft employee ripped off this guy's graphic", but "people using AI image generators may receive near-copies of existing media instead of new content, with no indication that this has happened".

If this has been discovered once, it must be happening every day. What can we do about that? Perhaps image generators need to build in something like a Tineye search to validate the novelty of their output before returning it.

reply
zahlman 16 hours ago
Yes, but from OP's perspective this is a distinction without a difference.
reply
marssaxman 8 hours ago
Clearly, but OP would be well advised to apply Hanlon's razor. The victimhood narrative does not improve understanding, which is necessary to work for better outcomes.
reply
pwndByDeath 17 hours ago
reply
aobdev 17 hours ago
Check the article, AI interpreted the phrase “continuously merged” as “continvoucly morged”
reply
tra3 17 hours ago
I too was confused until I looked at the included screenshot.

This is just another reminder that powerful global entities are composed of lazy, bored individuals. It’s a wonder we get anything done.

reply
locusofself 17 hours ago
we are also stressed, scared for our jobs and bombarded by constant distraction
reply
ChristianJacobs 17 hours ago
You apparently did not read the article. "Morged" is a word the LLM that ripped off the article author's diagram hallucinated.
reply
zahlman 16 hours ago
[flagged]
reply
amdivia 16 hours ago
I'm failing to understand the criticism here

Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?

Or is it about using some graphs without attributing their authors?

If it's the latter (even partially) then I have to disagree with that angle. A very widespread model surely isn't owned by anyone; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.

(Sidenote: if it was meant in a lighthearted way then I can see it making sense)

reply
sixeyes 16 hours ago
did you read the article? this is explicitly explained! at length!

not at all about the reuse. it's been done over and over with this diagram. it's about the careless copying that destroyed the quality. nothing was wrong with the original diagram! why run it through the AI at all?

reply
matthewmacleod 16 hours ago
> Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?

I mean come on – the point literally could not be more clearly expressed.

reply
yokoprime 16 hours ago
A somewhat contrarian perspective is that this diagram is so simple and widely used and has been reproduced (ie redrawn) so many times that is very easy to assume this does not have a single origin and that its public domain.
reply
zahlman 16 hours ago
That's pretty hard to reconcile with OP's claim:

> In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it.

If you mean that the Microsoft publisher shouldn't be faulted for assuming it would be okay to reproduce the diagram... then said publisher should have actually reproduced the diagram instead of morging it.

reply
blibble 16 hours ago
it's not public domain, it's copyrighted

what's the bet that the intention here was explicitly to attempt to strip the copyright

so it could be shoved on the corporate website without paying anyone

(the only actual real use of LLMs)

reply
jacquesm 14 hours ago
That's what's so disgusting here: it wasn't even about payment, it was about not having to attribute it to who created it. That's too much of a payment for MS, so they just take your stuff, run it through their white washing machine and call it a day.

See also: Copilot.

reply