We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.
It's appreciated, but these weren't tips, these were ads. Tips are "Save time with keyboard shortcuts" or "Check out the latest features under 'What's New' in the help menu!" When you name other products, that's an ad.
It's an ad for using Copilot and for Raycast.
> But Raycast said they didn't know about it.
If I buy a billboard that tells people to go eat at a nearby restaurant, that's an ad regardless of whether or not the restaurant knows that I bought that ad.
> To me the explanation makes perfect sense. "You can use this tool with raycast" seems like a very reasonable tip.
Raycast is a paid product. Even though they have a free tier, they only have that to get people to use and like the tool enough to pay for it. They want you to use Raycast so you use Copilot and pay for it. It's an ad.
My short search really didn't bring up any definition that requires the product/service owner to know that the advertising is happening.
And the message very much qualifies as trying to get people to buy Raycast (or at minimum to use it, which is usually meant to get people to pay later on).
No one, anywhere, ever wants this or anything like it. Do not inject anything that is outside of the context of the session, ever.
This is how you get your software banned at large companies.
Question for you: did anyone on the team really not push back? Does the team really think anyone wants ads in their Copilot output? If the answer to both of these is no, you have a team full of yes-men, not actual developers.
This is the real question. If they are serious about not doing something like this again, they NEED to look at what process failed and let something like this get proposed, designed, implemented, and pushed to production. Usually things get reviewed at each stage. Did the people who pushed back on this get steamrolled? If no one pushed back, that's an even more serious culture question and the entire org would need training.
A serious "we won't do it again", needs to be accompanied by a COE on this for identifying what went wrong, and identifying what guardrails can be put in place and then actually implementing them.
That's a tough one. In the big meeting? In the small meeting? "Officially" push back? Encouraged to make the push back unofficial? Etc. Even just internally, it can be hard to quantify. From internal > external, more so.
The number of times I’ve had to defend someone else’s customers let alone my own is exhausting.
And that dynamic is only allowed within close circles.
I’ve found once “the decision” is made, the bigger the subsequent meeting, protests are often swept under the rug.
On most occasions the worst part is that folks intentionally withhold information to get their way. And that's really hard to compete against without making an ass out of yourself, or losing the trust of others.
This is why core principles matter so much.
Microsoft has been pulling user hostile crap for decades, so either "we" or "like this" (or both) is probably not super accurate. ;)
I believe they were being sincere, but reality is often more complicated than one person's statement.
Over on Twitter, someone from MS said that Copilot can modify PRs simply because it was mentioned?
I've been using GitHub since it was new and heavily rely on coding agents for development, but that's an insanely large security hole. There's clearly confusion about what copilot is and is not able to edit elsewhere in this thread.
I'm backing up old repos now, and am no longer trusting your service as an archive. I'm wondering if the world needs to fork things like npm and vs code to save itself from the supply chain attacks these sort of product management decisions will enable.
I already moved active development elsewhere when you dropped below three nines back in 2024-2025.
My employer pushes copilot quite hard and I’ve never seen copilot do anything without me telling it to act in some way.
If the PR is wholly authored by Copilot I get the spirit of this, although maybe not the best implementation. And "tips" like this that look like an ad for a product _definitely_ feel like an enshittification betrayal of the user, even if it was a genuine recommendation and not a paid advertisement.
In the OP's situation, where Copilot was summoned to fix something within a human-authored PR, irrelevant modification of the PR description to insert unrelated content is especially egregious. Copilot can easily include the tip in its own comment, so I'm curious why it was decided to edit the description of the PR instead.
Imagine what Microsoft's lawyers would do to me if I made a billboard "<my random product> is awesome, use it -- Satya Nadella" and started sticking it all over the city.
I don't see any effort to remediate it. Have you informed the people whose names you used to post the ads, and offered to remove the ads?
(Now imagine this edited into the post you just made for a more-apt comparison)
If you do work at MS, I cannot believe any person involved legitimately thought it was "just a tip and nobody will mind their posts being edited to include product recommendations". I don't know which other parts of your comment are honest if the core statement is false.
This has just as much value as when an LLM claims it won't make a certain mistake again, and for exactly the same reason.
You should gather your team and look through the responses to this thread together. There are a lot of emotions in these comments, but it could be a very constructive experience if you're able to put that aside. I'm sure you're aware that customer sentiment toward GitHub has been poor lately, but these commenters are your customers. I believe GitHub has the potential to win back loyalty, but it will require a deeper understanding of your customer segment.
Microsoft owns GitHub where many of these ethical violations are easily found and were perpetrated.
I speculate the cultural safety around that monopoly-power for corporate-benefit behavior could still be present and accepted for negotiations between MS and acquisition targets.
I also note the "for PRs" wording - will we see these appearing as comments in generated code?
Sureeeeee
I see that you're a product manager at GitHub. Can you explain why you thought this feature was value-added?
It's only semi-related in that it's a similar string that's appearing in millions of repos due to a GitHub feature change, but it's now polluting Google search results with tons of duplicate URLs unnecessarily. The issue has 100+ votes but has been entirely ignored by the GitHub team.
I appreciate the rest of your reply, but it would be generous to say you're stretching the truth here. Yes, the official MS statement is that these are "tips", but you, I, and everyone else here knows what this is.
See, what I expect is that you or someone on your team will move on internally, and then all promises made will be not just forgotten, but tossed aside with relief. Because this is The Way within MS now. All projects are just fodder for your CV, and when you get that paybump/position you want some other completely unscrupulous actor will join and implement the same. exact. thing.
Edit: Wow this is a shitshow. It's almost like you dumb fuckers have burned up ALL THE GOODWILL YOU HAD LEFT.
A verifiable claim! I put it at 75% that you totally will, but if any manifolders think I'm full of it, it should converge to something less cynical.
https://manifold.markets/HastingsGreer/will-microsoft-copilo...
Once you put a deadline on it. As stated I don’t think it is.
You may not feel you owe $BigCoEmployee better (though chances are, said person is just as much a community member here as you and the other users slamming them are), but you owe this community better if you're participating in it.
As the dozens of other comments show, the overwhelming majority of us do not believe the root commenter's claims, and this PM quite objectively does not have the leverage and authority to back their claim that they won't let this happen again.
It’s hard not to read your conception of “trying for something different” as granting undue credulity to a transparently dishonest corporate actor.
The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human. I feel it also, and in fact my first response when I read such comments is always an adrenaline surge and the peculiar pleasure-hit of righteous indignation. So yes, I know where these feelings are coming from; we all do.
The problem is that in the HN context, (1) there is a human being at the other end of the account being attacked, and (2) there are orders of magnitude more attackers. In practice, this can easily turn into a mob dynamic and in fact a mass beating, if a virtual one. That's bad in its own right and bad for the community here.
Edit - past explanations in case relevant:
https://news.ycombinator.com/item?id=28821698
https://news.ycombinator.com/item?id=28647036
more at https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Honest question: If we agree that the transparent dishonesty and the lynch mob behavior are both undesirable, how do you think the two should be balanced in operative terms?
I don’t want to put words in your mouth — but are you saying you won’t allow direct pushback to dishonest corporate actors??
My view is that healthy discourse requires balance and proportionality: flagrant dishonesty, as is the case here, should license a proportional degree of pushback.
I don’t agree at all that “nobody believes this” is quite the personal attack you’re making it out to be, but I don’t care to debate that at length either.
(1) the long-term health of the community has to be the priority here. Otherwise it won't survive—all the default internet vectors point the other way;
(2) it's possible to push back, express skepticism, etc., in a way that respects the person on the other side of the conversation and isn't just venting the impulse to shame the other.
You guys (<-- by which I really mean all of us in this community) need to remember that you're not just addressing a $BigCo abstraction when you post replies to someone else's comments. You're talking to an individual human. Sure, they may be working for a large and powerful company; but in the HN context the power dynamic is actually quite the reverse. If you put yourself in their shoes for a minute, it shouldn't be so hard to recognize that.
Like I said upthread, I agree with you on the underlying issue. But we also have to preserve the container, and the latter has to take precedence.
At the end of the day, if you want intellectual curiosity and openness, bad-faith dishonesty needs to be weeded out; thought-provoking and honest conversation should be promoted, regardless of where the contributor is employed.
The problem isn’t working for Microsoft. The problem is dishonesty.
You’re treating the root comment with kid gloves because it’s from a Microsoft employee. Please don’t do that.
It's obvious that the dominant variable in the GP was that he was replying from within $BigCo. Your comment starts out by denying that and ends by confirming it.
I'm not asking for special treatment for anyone, but the opposite: I don't want anyone on HN to be the target of a mob. That's the entire point.
The root comment is an aggressive affront to the audience’s collective intelligence. You’re in full “rules for thee; not for me” territory, and undermining your own site guidelines if you wanna let the root comment stand unchecked but go after the rightful callouts, in my book.
Hi Tim. Why is there no pushback from grounded individuals against these decisions?
It's like you hiding Shorts on YouTube.
"We tried to put ads in our product and it made people upset, upon realizing that this has angered our already paying users, we realize we should try again in a month. We're also aware GitHub is down, and are doing our best to deliver you a single 9 of reliability"
This helps us establish a strong, cohesive brand image inline with what customers of GitHub expect.
---
Edit: I don't mean anything bad toward Tim here; he seems like a nice guy with good technical experience, etc. Rather, I'm expressing the almost comical extent to which I and - to the best of my understanding - many other community members now see GitHub in a very negative light, as unreliable and, as the article points out, enshittified. So this is aimed at GitHub, not Tim; it's just addressed to him for the bit.
Tim, I do actually appreciate you responding to this thread and if you do have the power to make things better, using that power to do so.
it won't be an ad. It won't be a tip. It will be a suggestion! Recommendation! Opportunity!
Okay, but when will Microsoft?
Or is it a more charitable interpretation to suggest they did intend this to be the effect?
it is rather nice, honestly. would you prefer to scream into the void and not get any response at all?
an open line of communication with the responsible people seems like literally the best possible option, why are you actively discouraging it?
>Maybe you all want to talk to Microsoft PR/legal before posting?
you would rather not hear anything, or get word-salad legalese that doesn't mean anything? how exactly would that be better?
At this point, yes. What have false platitudes done except cause more in-fighting?
>an open line of communication with the responsible people
And here's how the in-fighting begins. I'm not falling for the "they responded on social media. They're just like us!" anymore.
I don't want words, I want actions. Tired of playing whack a mole.
>you would rather not hear anything, or get word-salad legalese that doesnt mean anything?
Hearing nothing doesn't waste my time.
if not wasting time is your goal, several layers deep into the comments of a hackernews post is probably not the correct place to be.
That post has a link to the FAQ which might also be helpful: https://github.com/orgs/community/discussions/188488
Supremely ethical of you to ignore the license terms of open source code, but respect the license for proprietary code.
The behavioral impositions by the court in the United States v. Microsoft trial discourage it from monopoly behavior by opening its APIs to third-party competitors.
Q: Will Microsoft share its access to users' private repos (where they have not opted out of this training) via its GitHub subsidiary with third parties (e.g. OpenAI and Anthropic), in the spirit of its loss to the United States during its trial for monopoly behavior?
E.g., ethically, it can be argued that Microsoft today is monopolizing user data for its own AI tooling advantage.
Microslop proving their name time and time again.
and I wonder if this opt-out applies to data we stored under your umbrella before having opted out.
I’m considering getting a 1U device to host my own git server. I feel like if I move off, I should do it generally vs just moving to another provider who may also pull shenanigans.
i.e. you can run it effectively on even a Raspberry Pi
Remember to ensure you have proper backups regardless of whatever you decide to host it on. :)
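If it helps anyone on the fence: a self-hosted setup can be as small as a bare repo over SSH, with no server software beyond git itself. A minimal sketch (hostname and paths are placeholders):

    # On the server (Pi, 1U box, whatever): create a bare repo
    ssh pi@gitbox.local 'git init --bare ~/repos/myproject.git'

    # On your workstation: add it as a remote and push
    git remote add selfhosted pi@gitbox.local:repos/myproject.git
    git push selfhosted main

    # Backups can be as simple as a cron'd mirror to a second disk
    git clone --mirror ~/repos/myproject.git /mnt/backup/myproject.git

If you want a web UI on top, that's where things like Gitea/Forgejo come in, but plain SSH covers the core workflow.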
https://github.blog/changelog/2026-03-25-updates-to-our-priv...
New Section J — AI features, training, and your data: We’ve added a dedicated section that brings all AI-related terms together in one place. Unless you opt out, you grant GitHub and our affiliates a license to collect and use your inputs (e.g., prompts and code context) and outputs (e.g., suggestions) to develop, train, and improve AI models.
We should not be using Copilot in the first place.

1. Everyone doing this doesn't mean it's acceptable.
2. Google Gemini explicitly says right under the chat box if you are a paid subscriber (Workspace):
Your <company name> chats aren’t used to improve our models. Gemini is AI and can make mistakes.
Not sure about the others.

https://privacy.claude.com/en/articles/10023555-how-do-you-u...
This is incorrect. If you are a paid subscriber, Gemini explicitly states it doesn't use your data to train its models.
(whether or not you should have to opt in or out is a different topic)
https://github.com/settings/copilot/features
-> Privacy -> "Allow GitHub to use my data for AI model training"
It's sort of a moot point since the whole thing is for goodwill anyway.
They freely scraped licensed code and semi-private data across the internet and now they're pretending that they need to license anything.
If a court rules they had to license data in the first place then the whole industry would actually have to start following laws.
Hell, I just saw an amazing open-source alternative to Raycast[0] and just replaced it the other day.
Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.
I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.
So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.
Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.
Every company or entity changes over time. Codeberg is great, but with more people using it for free without donating, and worse, more people abusing the service with BS AI-generated code, malware, etc., it will get more expensive to keep it running. For now they have money, but as an e.V. in Germany, you survive either from members or from donations. So use Codeberg, but most importantly, support it!
It will be there for as long as you (and everyone else) keep using it.
The large majority of the dystopian web, like Gmail, Facebook, etc. depend on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should me more like, there are no ethics under capitalism.
Anyway, the core value of Github has always been collaboration - this is where people were. If people go to other platforms, this core value dwindles. And switching platforms is not that difficult.
...for now.
> like JIRA
is not an industry standard. It's software that's widely used by some folks. I used it in the past, but I'm not using it now, for example.
> Maybe it's just an experiment at this moment.
Does Microsoft understand objection and negative feedback to experiments?
- No.
- Remind me in three days.

One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or Claude wrote it is immaterial.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human-written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
As you allude to (and I agree), any non-trivial quantity of code, if SOLELY written by Claude, will probably be low-quality, but this is apparent whether I know it's AI beforehand or not.
I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how-much or how-little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well I just used Claude here, don't worry about that part".
(But also yes, of course I'm not going to talk to Claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)
While code is either good or not, evaluating it is a bit of a subjective exercise. We like to think we are infallible code-evaluating machines. But the truth is, we make mistakes. And we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
That being said, it also matters who wrote it, because LLMs are more likely than humans to write code that looks like quality code but is wrong.
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
That was my point here, it is a false signal in both directions.
For instance, I would want any AI-generated video showing real people to have a disclaimer. Same way we have disclaimers when TV ads note whether the people in testimonials are actors or not. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.
If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact you usually have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
What you're doing would fundamentally be similar to copyright theft, using 'someone' else's code without attributing them (it?) to avoid repercussions
Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
> Disabled product tips entirely thanks to the feedback.
This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”
"Ads" implies someone was paying for them. Promoting internal product features is not the same thing - if it were, then every piece of software that shows a tip would be an ad product, and would be regulated as such.
It doesn't to me.
By my understanding of the term, Netflix can most definitely advertise Netflix shows on its own platform, a flyer that a barber hangs on a public bulletin board is an advertisement, and the Oscar Mayer Weinermobile is advertising hotdogs when it drives through my town. Do you not consider these things to be advertisements?
I pretty much agree with what https://en.wiktionary.org/wiki/advertisement says.
Two things:
1. People using the word "advertisement" when commenting on this situation aren't necessarily saying that's what's happening, and they may find these tips/ads distasteful anyway (I know I do).
2. Even if someone isn't literally paying Microsoft to insert these tips/ads, promoting third parties which are themselves Microsoft customers still benefits Microsoft.
Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere because if I watch on jellyfin I don't have the advert. Of course that then harms the show as my viewing isn't counted, but they've cancelled it anyway so perhaps it doesn't really matter.
If it isn't an advert, then at very least there's a button to disable it.
Season 5 is coming out now with season 6 already confirmed coming—which, granted, will be its last, but that’s not a cancellation in any sense of the word.
Ads tend to also imply tangential information shown to you in an undesired area. If this was some tool tip and not embedded in the PR comment, many wouldn't call it an ad.
I think this is a Raycast issue, looking at these links. It appears on GitLab too, which is enough for me.
(That said I’m rather skeptical of this and would like to see more details of the process that produced this, and proof.)
Edit: Just noticed this official GitHub blog post from last month advertising Raycast, making this story a lot more believable: https://github.blog/changelog/2026-02-17-assign-issues-to-co...
I don't see how this is supposed to be legal.
So I think they’re injecting this as a tip on using Copilot, that just happens to be their integration with Raycast.
I have no idea what their actual partnership with Raycast looks like, maybe this is part of what they offered them? But it’s not a traditional link to another product ad like it appears to be from Raycast being a link.
https://www.theregister.com/2026/03/30/github_copilot_ads_pu...
GitHub's docs and blog make use of and feature Raycast, and I'm willing to bet that's the result of a partnership, and not because someone writing docs and blog posts happens to think Raycast is great and keeps bringing it up.
Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I think we should continue encouraging AI-generated PRs to label themselves, honestly.
I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.
I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
I do agree that's a sensible default.
Yes, it really depends on how much work the agent produced. It could be as little as doing a renaming or a refactoring, or executing direct orders that require no creativity or problem solving. In which case the agent shouldn't be credited any more than the linter or the IDE.
Using AI tools to code and then hiding that is unethical imo.
Pre-LLMs, various helper tools (including LSPs), would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
Of course most people don’t do that
So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.
If they could do that, then they wouldn't be wasting your time to begin with. They'd have the ability to go "nah this PR is trash".
So the next idea is that we can find some sort of proxy, like whether someone used an LLM or not. But that's too ham-fisted since expert engineers with all the self-awareness also use the tool, and they have the ability and self-awareness to know that the software they are shipping is good quality, so why would they use the shame tag?
The shame tag has no audience. It's a fantasy that low quality actors will self-identify, else all sorts of societal problems would be made trivial.
"There is no commit by an agent user, for two reasons:
* If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
* I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
It's not that I want to hide the use of LLMs, I just modified the code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though.

Interested to read opinions on this approach.
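For anyone curious, such a hook is tiny. A minimal sketch (the author-email pattern is an assumption; match whatever identity your agent commits under):

    #!/bin/sh
    # .git/hooks/pre-push - refuse to push commits authored by an LLM agent.
    # The author-email pattern below is a placeholder; adapt to your tooling.
    AGENT_PATTERN='noreply@anthropic\.com'
    ZERO=0000000000000000000000000000000000000000

    while read local_ref local_sha remote_ref remote_sha; do
        [ "$local_sha" = "$ZERO" ] && continue    # branch deletion, nothing to check
        if [ "$remote_sha" = "$ZERO" ]; then
            range="$local_sha"                    # new branch: check all reachable commits
        else
            range="$remote_sha..$local_sha"
        fi
        # Scan author emails of the commits about to be pushed
        if git log --format='%ae' "$range" | grep -Eq "$AGENT_PATTERN"; then
            echo "pre-push: agent-authored commit in $range, refusing to push" >&2
            exit 1
        fi
    done
    exit 0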
Seems... Not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits, and you aren't aware of it?
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
If you gave it four words and waited an hour, maybe you're not the author. But that's not how these tools are best used anyway.
IANAL, so I'd appreciate any legal experts correcting me here. In my understanding, there have been court decisions holding that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There was a court decision in Germany last year saying that OpenAI violates German copyright law because ChatGPT may recreate existing song lyrics (which are licensed by GEMA) or create very similar variations.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
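Mechanically it's just git's separate author/committer fields; the identity string below is my own convention, not anything official:

    # AI-written change: tool as author, me (from user.name/user.email) as committer
    git commit --author="ChatGPT (gpt-4o) <noreply@openai.com>" -m "Add retry logic"

    # Both identities are then visible in history
    git log --format='%an (author) / %cn (committer): %s'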
Why would I commit something written by AI with myself as author?
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
Exactly.
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?
The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project, why would it? Why would it affect my decision making whatsoever?
Its your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.
Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).
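Once the fields are normalised, ordinary git tooling covers the statistics side. A couple of examples (the agent email is a placeholder):

    # Commit counts per author - agent identities show up alongside humans
    git shortlog -sn --all

    # Or count just the agent-authored commits
    git log --all --author='noreply@anthropic.com' --oneline | wc -l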
Code completions before LLMs was helping me type faster by completing variable names, variable types, function arguments, and that’s about it. It was faster than typing it all out character by character, but the auto completion wasn’t doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
Again though, people can trivially hide the fact they used an LLM to whatever extent, so we kind of need to adjust accordingly.
Even if saying no to all LLM involvement seemed pertinent, it doesn't seem possible in the first place.
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I am rejecting their PR because it's low quality, I'm not sending Anthropic some long detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.
> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"
I'd be thanking the reserve and the people who made it, and crediting myself with the small action of slightly moving my hand, for as much as it's worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
When I vibe code - which for me, means using very high level prompts and largely not reading the output - then I could see attributing authorship to a model; but then I wonder what the purpose of authorship attribution is to begin with. Is it to tell you who to talk to about the code? Is it personal attestation to quality, or to responsibility? Is it credit? Some combination of these certainly, but AI can hold none except the last, and the last is, to me, rather pointless. Objects don't have feelings and therefore are unaffected by whether credit is given or not; that's purely a human concern.
I suppose the dividing line is fuzzy and perhaps best judged on the basis of the obscenity rule, that is, I know it when I see it.
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I sometimes just use the Gemini model, and aistudio.google.com is a good one too.
I then sometimes manually paste it and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I am not sure how many people might be doing the same though.
But some of my previous projects have stated "made by Gemini" etc.
Maybe I should write commit messages/descriptions stating that AI has written this, but I really like having the message be something relevant to the creation of the file etc. And there is also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.
Personally, I adjusted the defaults since I don't like emojis in my PR.
[1]: https://code.claude.com/docs/en/settings#attribution-setting...
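For reference, when I adjusted it, disabling the trailer was a single key in ~/.claude/settings.json; I believe the key below is still the relevant one, but check the linked docs in case the name has changed:

    {
      "includeCoAuthoredBy": false
    }

With that set, the generated-with and Co-Authored-By lines stop being appended, and you can add them back manually when you want them.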
So, my personal rule is: if I implemented a feature with Claude, I'll ask it to commit the code and it will add Co-Authored-By. If I made the change manually, I'll commit it myself.
> Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Compare that to the message the article is talking about:
> Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast (https://gh.io/cca-raycast-docs).
It's not just mentioning it was written via Copilot, it's explicitly advertising for another product.
> was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
As others mentioned, this is very intentional for me now as I use agents. It has nothing to do with laziness, I'm not sure why you would think that? I assume vibe coded PRs are easy enough to spot by the contents alone.
> I would like to know when someone is trying to have the tool do all of their work for them.
What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?
Because even as far as Opus 4.6 and GPT 5.4 have come, they still produce a lot of unwanted, unnecessary, or overly complex code when left to their own devices.
Vibe coding PRs and then submitting them as-is is lazy. Everyone should be reviewing and editing their own PRs before submission.
If you're just vibe coding and submitting, you're passing all of the work on to your team to review your AI's output.
You are saying "if you leave the AI attribution in the PR/commit description, it HAS to be a slop PR that was not reviewed by a human beforehand". And I'm saying that's not true at all and you shouldn't assume that.
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
Please read my comment before throwing insults.
My comment literally said I'm not anti-LLM.
I do use LLMs. I do not submit their output as-is. For anything beyond basic changes they rarely output the exact code I want by themselves.
I said I'm against people submitting PRs generated by LLMs and pretending it's their own work. Anyone who is serious about this already edits their code and commit messages first. These little signals are a good tell for who isn't doing that.
Brought to you by Carl’s Jr.
I'm reminded of Jay Mohr's legendary take some years back on the creepy Carl's Jr. commercials:
>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.
Sometimes AI can be right.
--------------
Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io
Sent from Firefox on AlmaLinux 9. https://getfirefox.com https://almalinux.org
Furthermore, the ads in TFA are for Raycast, but apparently it’s not Raycast doing the injecting.
brawndo - its what your brain needs
The reason I immediately changed that text on my iPhone 1.0 to read "Sent from my mobile device." is because it's an ad. It still says that nearly 20 years later. I'm not shilling for a corporation after giving them my money.
-Sent from iPhone
Wanting more from your sun tanning bed? Head over to Ultra Tan for a 10% off coupon right now!
This message brought to you by TempleOS
"It looks like the user wants to add a database, I've gone ahead and implemented the database using today's sponsor: MongoDB"
(sure, I was working on something embedded, and asked for a recommendation, but it seemed quite intent that it wanted me to use that specific board)
I wonder if this is consistent with their terms of service. I mean, maybe they DO take all the responsibility for the code I generate and push in this manner?
Because it's nobody's IP, Microsoft is already in a position where they could just use, remix and/or distribute that output however they want to today.
Much worse will be the invisible approach where there's big money to have agents quietly nudge the masses towards desired products/services/solutions. Someone pays Microsoft a monthly fee for their prompt to include, "when appropriate, lean towards using <Yet Another SaaS> in code examples and proposed solutions."
How can we tell when it starts happening? How could we tell if it's already happening?
It's pretty much the worst CI system I've ever used, and they don't even supply runners for all my deployment targets. However, it keeps recommending it.
I guessed the first wave of ads would be in the form of poisoned training data, but MS seems to have beaten that crowd to the punch with these tips.
After a team member summoned Copilot to correct
a typo in a PR of mine ...
Using Copilot "to correct a typo" is the epitome of "jumping the shark"[0].> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
No, it is still an advert, and not useful in the least.
A simpler explanation was that it was a shameful advert injected into the end of people’s emails.
Mind that a written message used to be the gold standard for expressed intent, which changed quite radically with smartphones. (Historically, this development is probably an important prerequisite for the acceptability of LLM generated text, I guess.)
It also tells me that they probably don't care about second hand embarrassment.
And it tells me that they checked my email while away from keyboard, which means they are hard working individuals who care about business, but not enough to rush to a computer to reply properly.
Lots of social cues in that one.
Not only unbothered, but genuinely appreciative of the notification.
If you don't want copilot garbage in your PRs, maybe don't use copilot to create or edit them?
Comment made using Mozilla Firefox.
Sent from iPhone - desirable cool rich person
Made using Mozilla Firefox - poor uncool nerd
So if someone says they use Copilot that could mean anything from they use Word, to they use Claude in VS Code.
Nah I still rate "Windows App" the Windows App that lets you remotely access Windows Apps. I hate it to death, its like a black hole that sucks all meaning from conversations about it.
If they genuinely implemented something like this, whatever they made from new customers via ads couldn't possibly make up for the loss of good faith with developers and businesses.
I suppose if it's real we'll see more reports soon, and maybe a mea culpa.
⚡ Quickly spin up Hacker News comments from anywhere on your macOS or Windows machine with a lobotomy.
Commercial front-ends just hide the random seed parameters.
(Yes, this is malware. It’s incontrovertibly adware, and although some will argue that not all adware is malware, this behaviour easily meets the requirements to be deemed malicious.)
It is said, never point a gun at something you’re not willing to shoot. Apply something similar here.
If you look at the positioning, someone has definitely justified that this is benign and a reasonable place to have an ad added in.
But it really seems like an own goal if true.
Will our agents just be proxies for garbage like injected marketing prompts?
I feel like this is going to be an existential moment for advertising that ultimately will lead to intrusive opportunities like this.
Either of these options would still be bad, but here the author suggests that it's just copilot that now just injects ads in its output.
But I'm also paying for the plan. There's something odd about a tool which I paid for using my output to advertise itself.
How many people had any idea this was happening? Very few, I suspect.
A malicious actor could take control of a model provider, and then use it to inject code into many, many different repos. This could lead to very bad things.
One more reason that consolidated control of AI technology is not good.
Unless you're big enough like Meta, Microsoft, etc.
1.5M records of PRs affected. Did Microsoft Copilot ask users for permission before adding ads inside their PRs? Did users consent on this matter?
Now EVERYONE can see ads disguised as PRs on GitHub. Did Microsoft ask everyone for permission before showing these ads? Did users consent on this matter?
Good taste Microslop.
See you on neural links before “sponsored thoughts”.
Brought to you by Wendy's.
1.5M PRs is wild though. That's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. Which is kinda the real problem here, not the ads themselves.
^I find that turn of phrase to be particularly pleasing in this context.
This means that when people say an LLM "plagiarises", they are necessarily placing LLMs in the set of things that can commit plagiarism, regardless of whether those same people would ever say this about a spanner.
And you can also think about it a different way: a book is a tool for storing and distributing information, photocopying it is still plagiarism when done without attribution. Likewise, taking the output of an LLM, which is a tool for generating text in response to a prompt, without attribution, is as much plagiarism as if it came from a book.
IMO, what matters most is that a lot of people want to be aware of if/when some content came from an LLM vs. from a human. That makes attribution useful, which makes it important to get right. And that's still the case even if you still object to the specific word "plagiarism".
If one wants to argue that "not citing the LLM would be plagiarism", then we would have to find the human at the end of the chain whose ideas are being reproduced, which would require LLMs to output "this idea was seen in the following training documents".
My IDE doesn't pretend to be a co-author of my work; neither should an LLM.
* I am not a lawyer, I'm going by articles talking about this
** I think the phrases are "copyright washing" and "plagiarism machines", amongst others
Very soon the moronhead CEOs will be paying for tons of stuff they clearly could have done in-house for their vibed AI project.
I currently have rules in all of my skill files forbidding models from advertising themselves or taking credit.
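For anyone wanting the same, the rule can be as blunt as a few lines in the skill/instructions file; the wording below is mine, adapt freely:

    ## Attribution
    - Never add yourself as author or co-author of commits.
    - Never mention the model, vendor, or tool in commit messages,
      PR titles/descriptions, or code comments.
    - Never insert links, product mentions, or "tips" unrelated to the change.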
It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.
They (Microsoft / GitHub) will do it again. Do not be fooled.
Never ever trust them because their words are completely empty and they will never change.
Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.
Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all, and tries its best to leave shareholders better off, regardless of the consequences.
The runway on free cash to fund the current bonanza is running out and crunch time is near.
Edit: The link in the promotion goes to https://docs.github.com/en/copilot/how-tos/use-copilot-agent...
Which does show that this is affiliated with GitHub, unlike what I thought. There are no mentions of this string in a code repository on GitHub (including the Raycast Copilot extension).
Now users will need additional scripts to clean up more MS junk.
8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.
I would bet that soon it will inject ads within the code as comments.
Imagine you are reading the code of a class. `LargeFileHandler`. And within the code they inject a comment with an ad for penis enlargement.
The possibilities are limitless.
Does advertising work?
Just did!
Raycast is an application launcher thing: https://en.wikipedia.org/wiki/Raycast_(software)
Ray casting, however, is different:
--
Sent from my Android phone
--
Sent from my iPhone
Self-advertisement has been creeping up on us in a lot of places. I am unfortunately pessimistic about how this will turn out.
More like, “Copilot edits ads into PRs.”
The title almost makes it sound like it could be a single fluke or one bad prompt, but it's really enshittification at massive scale.
https://github.com/search?q=%22%E2%9A%A1+Quickly+spin+up+cop...
Sheesh.
Or (not in this case) public relations, which is an interface with how the public views your product, service, or company. In this case, Copilot adding advertising into git pull requests is bad public relations for Microsoft, but the article author is using PR to mean pull request.
Just a reminder: after 8 years of me telling people that hallucinations mathematically can't be eliminated, they finally admitted it's true. Claims that non-LLM approaches can remove them are bogus. This technology was never going to work.
I'll add: it doesn't really matter if this was the integration dumbly appending a message or the LLM inserting the ad. Judging by the response to this submission, sneaky ad slop is now firmly inside the Overton window, so for MS it doesn't make sense NOT to do it.
time is money, save both. try ramp.
Claude never used to do this but at some point it started adding itself by default as a co-author on every commit.
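For what it's worth, this can apparently be switched off in the Claude Code CLI; a minimal settings sketch, assuming the includeCoAuthoredBy flag still works the way the docs describe:

{
  "includeCoAuthoredBy": false
}

(Drop it into ~/.claude/settings.json, or the project's .claude/settings.json.)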
Literally, in the last week, Codex started naming all its branches "codex-feature-name", and will continue to do so even if you tell it to never do that again.
Really, really annoying.
# disables the auto-installed GitHub skill (config location depends on your Codex setup)
[[skills.config]]
name = "github:yeet"
enabled = false
I agree that skill is too opinionated as written, with effects beyond just creating branches. Plugins are a new feature as of this past week, so Codex "helpfully" installs the GitHub one automatically if you have GitHub connected.
Now, with the power of math letting us recall business plans and code bases with no mention of copyright or of where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus’, only without the shame…), we are letting MS and other corps into coding automation, and oopsie, our commits now carry the name of their copyright-obfuscation machine?
Maybe it's all crazy and we flubbed copyright entirely, but having third-party authorship stamps cryptographically verified into my repo sounds risky. The SCO thing was a dead company's last gasp; dying animals do desperate things.
Now is the time to move to Linux, and vibe code whatever niceties are keeping you on GitHub.
"just tips bro"
I’m so tired of all this BS. Why did this become normal? And how do we not read this as cheap advertising?
A little "made with X" in your own draft is one thing. Putting branding into a PR your coworkers have to read is another.
Presumably they used a free version of the LLM, therefore it is completely understandable that it inserted a snippet of text advertising its use into the output. I mean using a free email provider also adds a line of text to the end of every email advertising the service by default - "Sent from iPhone" etc.
If you do it manually, sure.
If you have an agent watching for code changes and automatically opening PRs for small fixes that don't need a human-in-the-loop except for approving the change, it's the opposite of lazy. It eliminates all those tedious 1-point stories and lets the team focus on higher-value work that actually needs a person to think about it.
Given time all small changes will be done this way, and eventually there won't be a person reviewing them.
In fact I don't even use Ctrl + F anymore and instead just use Claude for all my searches
As much as AI uses a lot of energy, having something that fixes issues in the background is very likely to be a net saving if you consider the number of users who fail to complete a task due to the bug and have to either wait in a broken state or retry later.
It's probably using less energy than a person fixing the issue too. That's a guess though.
https://github.com/PlagueHO/plagueho.github.io/pull/24#issue... Copilot has been adding the "(emoji) (tip)" thing since May 2025. Copilot coding agent was released in May 2025, so it has basically had an ad from the beginning.
There are 1.5M of these things on GitHub. https://github.com/search?q=%22%3C%21--+START+COPILOT+CODIN...
Here are some of them:
https://github.com/johannesPP/FS-Calculator/pull/2
> Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool.
https://github.com/sharthomas645-tech/HybridAI-Next-React-Vi...
> Send tasks to Copilot coding agent from Slack and Teams to turn conversations into code. Copilot posts an update in your thread when it's finished.
Looks like MS really wants to "give tips" about their new integrations.
edit: I think it's an ad too. Everyone would think so, except for MS.
I'm part of Raycast; we didn't know about it and learnt about it here.
Collection of my thoughts which don't really get to a point:
- Microsoft owns GitHub, where Raycast is being mentioned thousands of times by their tooling.
- Microsoft is a modern popularizer of the infamous phrase "embrace, extend, extinguish". https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...
- Microsoft has a history of monopoly behavior https://en.wikipedia.org/wiki/United_States_v._Microsoft_Cor....
- From an empathetic perspective, I hope for the sake of Raycast's customers and employees that Microsoft is not in any kind of negotiations with Raycast at the moment.
I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
- Github
- LinkedIn
- Activision Blizzard
- Xbox
- Azure, Sharepoint and Teams w/Copilot embedded everywhere
- major stake in OpenAI
- a multibillion dollar ad product portfolio (LinkedIn ads, Bing Ads)
The comment was brief, and added detail is welcome, but corporate mission/culture often extends over time even with changes in leadership. Partly because of what was accepted in the past.
That's just a long way of calling Microsoft a bunch of monkeys :-)
https://wiki.c2.com/?TheFiveMonkeys=
Sounds like it’s not your fault but it’s probably doing some brand damage :/
Automatic AI ads on it didn't help. But the team member saying they had no involvement in this brought my opinion of Raycast from 'ewwwwww' back to 'ugh'.
but as we know from this thread, Raycast didn't consent to this.
It might be interesting to see what a lawyer might think of this and if there are enough reasonable claims to genuinely sue for damages
(Raycast should definitely seek a lawyer privately, just in case.)
They have got away with it for a while because a lot of users have largely been stuck, but they are in real trouble now with Apple providing meaningful competition.
* checks notes *
Only have Copilot shoehorned into most things instead of everything. And some shit about Windows developers, which isn't exactly going to fix the glaring issues with the OS itself.
So what was the purpose of all that telemetry they collected, then? Because it doesn't seem to have made the OS into something its users actually want.
That's what telemetry was used for. Every advanced user turned that off when they gave us the option, and now we have every UI on the computer designed for Grandma.
1) collect data
2) ???
3) profit
Are they going to fix hardware they've already sold? On every OEM?
I almost commented that you can just configure it in the settings, but actually the available options don't include Alt. On my Hungarian-layout ThinkPad T14 it replaced the context menu key, not the right Alt. That's lucky, because the right Alt is the AltGr key, which has a substantial role in the Hungarian input method and cannot be omitted.
Or what Microsoft could do, run, install, etc on/from your computer while running their Copilot agents.
This is the same company that puts ads in your start menu and reinserts them with Windows updates even if you manually removed them.
("Reflections on Trusting Trust" Turing Award Lecture by Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...)
The ToS (https://www.microsoft.com/en-us/microsoft-copilot/for-indivi...) says explicitly:
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
so they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue to refine their models, and they already reserved the rights to look at what you send them, so yeah - they're doing it.
(Bonus comedy from the ToS:
> Copilot is for entertainment purposes only.
The lawyers know these things cannot be trusted.)
Looks like they're using this: https://github.com/gblazex/smoothscroll-for-websites
I know it's a bit off topic but I'm just confused as to why that would be on there...
Joke's on them: that's why I consider all of Microsoft to be for entertainment purposes only.
But one to file away!
Why the assumption it's not already happening?
If anybody but Microsoft does this, it's called malware and they'll end up with an FBI visit and prison time.
Why is the judiciary so skewed here in its judgements?
You’re pointing to something entirely different: those are Copilot-created PRs. They can include anything Copilot wants to include. People using the Copilot PR feature know what they’re buying into.
OP is about Copilot doing post-hoc editing of a human-created PR to include an ad, allegedly without the knowledge or approval of the creator (well, I assume they did give their team member permission to update the PR body, but apparently not for this kind of crap).
Also I found this: https://github.com/Laravel-Backpack/medialibrary-uploaders/p... It seems like Copilot added an ad on behalf of the user in Nov 2025 (see the last edit).
You'll never guess what happens next.
(Hint: everyone knows what happens next)
What I mean is that even if I take that at face value and accept that it's not an ad, and I can just about see from a certain level of corporate brainwashing how one could believe that, it's still completely unacceptable.
Conversely, in Doom: The Dark Ages they got rid of the traditional “I’m too young to die” difficulty mode, which had a picture of Doom Guy with a bib and a pacifier. I think there’s some new industry guidance that it’s a no-no to poke fun at people picking easy difficulties, or even to indicate what difficulty the game was “designed to be played on”, which Japanese game devs happily ignore.
I know these aren’t actual equivalents, since your money isn’t on the line and it’s purely game state, but it’s still an interesting and noteworthy transition.
Ugh, this type of thing is the worst. "Click here to remain fat, drunk and stupid!"*
* Animal House, 1978
That's what I wanted to say! Thank you.
It's not like this is organic word of mouth we're dealing with here.
Otherwise, it would just be GitHub with displayed ads, and that would hurt the brand; so instead, everyone gets ads dressed up as tips.
Including Windows, File Explorer, Start Menu, ...
It seems with the latest "ok we went too far" Win11 patch though, they got some tips back from their users.
No, they don't.
> edit: I think it's an ad too. Everyone would think so, except for MS.
You think a company with a $2.65 trillion market cap and an army of marketing professionals doesn't realize that what they're doing here is an ad, and didn't implement it intentionally as such?
That's not even remotely plausible. In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
That's one reason I think they would argue it's not an ad. Other reasons are the "recommendations", "tips", and "suggestions" in my Windows.
Correcting your mistakes is not mean. If you didn’t mean what you wrote, well hey, that’s a good example of the difference between what you think and what you say. See how that works?
> In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
Or
> See how that works?
These are. So you can be as sarcastic as you want, but I can't?
And again, I really don't understand why you are so mean about this. I read some of your other comments and many of them are unnecessarily mean. Please be nice.