https://knowledge.workspace.google.com/admin/gemini/ai-ultra...
https://www.businessinsider.com/google-deepmind-ai-tool-divi...
They didn't. They just licensed IP and some developers.
> released antigravity
It's a crappy, half-finished Windsurf fork that constantly core-dumps on Linux.
It's been a while since I visited any google pages and I'm shocked how insipid and soulless their UX still is.
Cider was also used a lot, but I've heard that even back then some folks were free to use whatever they liked - vi, emacs, you name it.
I developed a fork of the IntelliJ IDE in my second week at Google, out of raw frustration over latency. At the time I was commuting 2-3 hrs/day SF<>MTV on the gBus.
Connectivity on the bus wasn't great and latency was high. Cider didn't have deep integration and couldn't let me explore and understand the internal APIs effectively. I found it easier to enter a debug session within IntelliJ, then 'vibe' and explore the internal APIs via superComplicatedObject.ini<tab>.
Faced with an alien architecture, ADHD-unfriendly flow-crushing remote desktop latency, and a lack of discoverability, I started hacking at it without any knowledge of the system or architecture - just tracing IntelliJ execution, subprocesses, and network calls.
I was able to hack together a prototype in a few days that let me run IntelliJ on my Mac while the heavy bits ran on my corp desktop. The system would mount the remote filesystem over sshfs, monitor and patch network connections, and set up transparent shim binaries. Half of IntelliJ ran on the Mac (the front end) and the other half ran on Linux; IntelliJ didn't "know" it was running on a Mac. This was initially implemented in a ~250-line shell script that patched everything.
It was called MDProxy[1] and ended up getting adopted and supported during COVID as more development went remote. It became a source of many peer bonuses and spot bonuses. Circa 2017, the remote coding options were:
          typing   | code
          latency  | integration
--------------------------------
cider     low      | meh
mdproxy   low      | great
ssh+vi    med      | meh
rdp+iJ    crushing | great
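The transparent-shim part of this can be sketched roughly as follows. This is a minimal, hypothetical version: the remote host, paths, and sshfs options are invented, and the real ~250-line script also did the network patching described above.

```shell
# Hypothetical sketch of the MDProxy shim idea. REMOTE and all paths
# are invented for illustration.
REMOTE=me@corp-desktop
SHIMDIR=$(mktemp -d)

# The remote source tree would be mounted locally over sshfs, e.g.:
#   sshfs "$REMOTE:/google/src" ~/google3-mnt -o reconnect,auto_cache

# Transparent shim: a local "blaze" that silently forwards the
# invocation to the Linux workstation. (Quoting here is naive; a real
# shim must escape arguments properly.)
cat > "$SHIMDIR/blaze" <<EOF
#!/bin/sh
exec ssh $REMOTE "cd /google/src && blaze \$*"
EOF
chmod +x "$SHIMDIR/blaze"

# Put the shim first on PATH; the IDE now "runs" blaze without ever
# knowing the heavy lifting happens remotely.
export PATH="$SHIMDIR:$PATH"
command -v blaze
```

The same pattern repeats for every heavyweight tool the IDE shells out to, which is why a front end on the Mac can behave as if the whole toolchain were local.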
[1] https://github.com/bazelbuild/intellij/blob/6b8f03c21172033a...

Yeah, I was working out of the Sydney office. Almost everything was incredibly slow due to that latency - not just chromoting, but also just accessing most sites through BeyondCorp.
May I ask, how are things going? Also, will your IDE always be focusing on transactional law or have you considered expanding to other legal areas and/or markets?
When this project got started, "VS Code for transactional lawyers" was the target. We pretty well have that on offer at this point, but it sits in a weird spot that makes it harder to sell than it would have been in, say, 2024. Right now, "AI forward" lawyers are spinning out of law firms in droves to start "AI native" firms backed, for example, by YC. They're so comfortable with Claude that they for the most part bypass the need for Tritium (or at least they think they do ;). OTOH, large law firms are inundated with legal tech products right now and have a hard time even understanding how an IDE benefits their lawyers. We're also trying to stay away from VC funding (other than from a certain awesome one ;), so we're missing a key signal for enterprise buyers. As I mentioned above, it's super hard to even set up a hands on demo because we have to get the desktop app installed on their infrastructure. But I'm shocked to learn that Googlers are happy to work in a browser, and distributing Tritium via browser is trivial, so we're going to 180 on that right here and now.
That all said, we eliminated the "free tier" as advised back in the Show HN thread, and we've managed to find a very small market in individual users. We're also finding some opportunities with the AI natives using an "unreal engine for legal tech" model that makes Tritium source available and handles the boring editor-related parts of their innovation.
I should probably do a post on this, but there's actually a topic we're working on that perhaps the HN audience will find even more interesting... coming soon!
[edit: I realized that I haven't responded to your question re: other markets, but accidentally did with the hint. We have some ideas.]
> As I mentioned above, it's super hard to even set up a hands on demo because we have to get the desktop app installed on their infrastructure. But I'm shocked to learn that Googlers are happy to work in a browser, and distributing Tritium via browser is trivial, so we're going to 180 on that right here and now.
"Trivial" in the sense you can just compile everything to WASM? I'd be curious to know what such an IDE would feel like in the browser. I think the only WASM-based GUI apps I've tried in the browser were Flutter apps and those were… weird.
> I should probably do a post on this, but there's actually a topic we're working on that perhaps the HN audience will find even more interesting... coming soon!
I'll keep an eye out for the next Show HN! :-)
Yes, that's about it. We rely on threads a lot in the desktop version which doesn't map as easily to WASM so there is still some work to do. But if you remember back to the original Show HN post, it was running in the browser there. So we have experience with it.
There is a bit of uncanny valley that comes with using WASM with <canvas> in the browser like we do rather than the DOM. There aren't reflow events in the same way, and frankly it's just a lot snappier than you expect. But it comes with a lot of trade-offs and you're forced to reinvent the wheel if you totally abandon web primitives.
Best of luck on your web-based demos! Dropping people into a working dummy environment with a few tutorial prompts should really help conversions.
That's why 80% of developers use a web based VS Code/Cursor
The article is framed around "all Googlers" but there is still a very large contingent of Googlers who cannot use these tools.
For anything with native UIs, I suppose you could "remote desktop" into an app or a simulator running in the cloud but at that point you might as well run that locally and cut out all the issues introduced by networking.
This does exist. The network isn't the main problem. The emulator has to run under nested KVM; that plus graphics rendering on the CPU makes it not very responsive. It's usable enough in many cases, though.
I can run an Android app on my phone and have it pop up in Android Studio. I don't see a reason you couldn't do this with a remote simulator or even a remote physical phone.
Most companies and projects have orders of magnitude less code, and don't restrict where that code can be stored. It's interesting to learn about Cider and the other things Google built to address their unusual situation, but it's worth keeping in mind that their approach probably isn't ideal in ~most modern dev scenarios.
> There was a policy that forbade having code from this monorepo on your laptop.
Was this due to security and/or technical reasons?

The aspect I miss is the distributed compilation hinted at in the article. I remember back at the end of the 1990s using distcc and such, but that never seemed to happen in the Java world, and tooling like Maven etc. is structured to make everything one long dependent chain. Shame.
Our bazel system is full of custom skylark code so understanding the build means effectively reading a bunch of ad-hoc code written with varying degrees of competence and with confusing dependencies. I’m kinda ashamed I don’t have a deep understanding of a tool I use daily - but every time I try reading the documentation I quickly give up.
The second thing is distributed caching. Done right, not only are your test results cached, but CI's test results can be cached too.
The third thing is distributed builds. This only starts to matter in big projects, but compilation is inherently a spiky load and if you can share a big pool of compute between a big pool of engineers, you get higher hardware utilization and lower latency to build artifacts.
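For teams outside Google, the caching and distributed-build layers described here map roughly onto Bazel's remote-cache and remote-execution flags. A sketch of a .bazelrc, where the grpc:// endpoints are placeholders and not real services:

```shell
# Sketch of a .bazelrc wiring up shared caching and remote execution.
# The endpoints below are placeholders.
build --disk_cache=~/.cache/bazel-disk                # local cache shared across checkouts
build --remote_cache=grpc://cache.example.com:9092    # cache shared with CI
build --remote_executor=grpc://exec.example.com:8980  # distributed build pool
build --jobs=200                                      # fan out across the pool
build --remote_download_minimal                       # skip fetching intermediate outputs
```

With a shared remote cache, a test CI already ran at the same commit is a cache hit for every engineer, and the remote executor turns the spiky local compile load into work on a shared pool.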
The fourth thing, something that isn't really feasible outside big tech, is you could be bazel all the way down in a big monorepo. One of the niftiest things at Google is to be able to put a printf inside a database server and run your client test, and blaze knows that it needs to rebuild the database server and it will do it automatically, so that you can get extra insight at almost any level in the stack.
"Hey, where's your tool's code in $MONOREPO?" "<path/to/stuff>"
Cool:
g4d my-citc-client # moral equivalent to `cd ~/repos/stuff`
blaze run path/to/stuff:target
... and you get a running version of whatever $stuff is, immediately built from head, quickly - no matter the set of dependencies, or which language they were built in. I can just try your thing out immediately with a common interface for all the builds, and I don't need to understand the build at all, unless or until I do, and then OK, absolutely every single build is always expressed in exactly the same way, same idioms, same patterns...

I know how to _use_ bazel effectively to do my work. I'm comfortable with its well-designed surface, but whenever I've tried to understand the inner machinery I've given up - especially when presented with a bunch of custom skylark rules code.
It's like an anti-git in some regards - the surface of git (the CLI) is an abomination in many ways, but the mechanics of the tool are so ingrained and the model is so clear and simple that I never feel uncomfortable.
I've a need to have some comprehension of the inner machinery or the underlying model of my tools.
Google’s dinky browser based Cider was cute but Facebook in its transition from Atom to VS Code was far ahead. Google might have invented asynchronous web based code review with Mondrian and Critique, but Facebook’s Diff was better with its stacked diff support. Google’s Buganizer was outdated and clunky compared to Facebook’s Tasks.
I left Facebook the year after but I do wonder where Meta’s tooling is up to nowadays. Is it still a glimpse of the future?
Regular engineers could use stacked diffs proficiently and regularly, without it being seen as a super advanced 10x engineer power user thing.
(I didn't get to use it much because I worked on embedded stuff that was on the Chromium stack and in git, not in Google3)
Buganizer (v1 and v2) was delightfully primitive and simple. That was the point. PMs couldn't play games with it.
When Google wanted engineers to use AI features, it turned them on in Cider-V by default. And if you turned them off, later updates would turn them back on. This is very good for your adoption metrics, but might not tell you exactly what you want to know about engineer happiness.
Such a dominant IDE also allows management to ignore the long-tail of users who aren't using it.
I once worked at a place where VPs were looking at sprint burndown charts, and asked what happened if the line didn't look a lot like the line expected by JIRA. The telemetry is therefore often a curse, as any metric becomes a target. How many companies today have KPIs about having automated code reviews, which are then ignored by the devs, because said reviews are just wrong on almost everything?
The learnings of Seeing Like A State don't apply just to governments.
But the downside is that you do get the Cider team constantly messaging and asking for reasons you won't switch. I gave feedback that their Vim bindings were broken (it would sometimes fail on holding down directional hjkl for no reason) but I'm not sure if they've fixed it since I left in 2023.
Cider is good for writing g3docs though.
I do think Google will continue to get results out of their tooling, as long as they are investing in the tooling. But that is not zero cost. Is it worth it for what they are doing? Largely seems to be.
But it isn't like they are that much more successful at software projects than any other company? They are still largely an ads company, no?
They have a ton of other software in 2026. And they have a pretty diverse (and diversifying) income stream today. Like 30-40% from non-ads.
Is it worth it? That’s for them to say, but they can ramp up cloud services at scale pretty fast as a core competency.
So, sure, lots of spots for software there. But still nothing that would make me think of them as a software company. Or, worse, a lot of software that I don't have a strongly favorable view on. :D
Sure, the money is mostly in ads, but serving searches, AI, youtube, and all the rest at the scale Google does it requires a technical tour-de-force. Does Google do it better than everyone? Absolutely not. But it does it better than many.
Certainly it isn't the _only_ way to do it--other companies also manage to do it. But not all that many at the same scale. It's an existence proof that you can.
Consider that they spend more on trying to build up and support this central IDE than most companies dream of losing in productivity to not having this.
Meta on the other hand, really just has ads.
I re-read this several times trying to figure out where the irony was hidden. But... it's not there?
So, again, are they that much more successful at software than other companies? They have more hilarious flops than any other company.
Don't get me wrong. I still use some of the stuff. I don't hate them. I don't even think they are particularly bad at things. I just don't think they are any more successful than other software companies. Specifically at the software side of it.
https://www.businessinsider.com/sundar-pichai-wants-to-build...
And yeah, they did/do a lot through acquisitions, but it seems like most major companies screw up acquisitions. Google has its fair share of failed acquisitions, but especially in the earlier half of the company's lifespan, they really did some great ones: YouTube, Google Docs, Nest...
Maybe I'm biased, but I've always thought Google in general does do it better than most tech companies. I think it's their focus on the love of interesting ideas vs. the love of money (although that changes more and more as the company ages).
AI is an odd example. For one, a lot of the research there is from acquisitions. Somewhat feeding back to my first point. They also were seen as tripping up on a lot of the current AI race, no?
But even though their AI models aren't the absolute leaders in every field, all their models are near the top across the board. Yeah, their recognition of this current dominant trend before any other major company has given them a big advantage in the number of fields they've applied AI to. For example, by putting their full weight behind DeepMind early on, they had a bunch of models before anyone else dealing with topics from protein folding to playing games. I think for them this might be the right strategy: explore as much in AI as you can, and figure out the ways it is truly revolutionary. Don't focus so much on creating products that will make money today or even in the near future. Take the long view... hmm, actually, a good example of this is Waymo; it seemed stalled out a few years ago, but is clearly the best self-driving car currently out there and finally growing market share.
Also, it was their researchers who kicked off the LLM race with their seminal paper on transformers in 2017 (yeah, they should have released an LLM first, but think they have made up for it since then).
Yeah, am trying not to be overly enthusiastic, but still, despite a couple of big mistakes in AI, they seem to have made mostly correct calls for the past ~10 years. It’s an impressive track record at least to me.
First, that's just not true. Their biggest products by revenue (search/adwords) and biggest stock value driver (AI/Gemini/Datacenters) are clearly in-house creations.
But even then, the two biggest "acquisitions" you're probably thinking of are YouTube and Android, acquired in 2006 and 2005 respectively. What fraction of the software base of those products do you think has survived the intervening two decades? To be blunt: most of the software being shipped out of those groups is being authored by engineers who couldn't even read when the ancestral code existed outside of Google.
Honestly the "acquisition" thing is just a cope meme promulgated by Apple stans, as it were. It's not a serious point.
Do these also take a lot of effort to keep going? Absolutely! But that doesn't change that they acquire a ton. They just acquired Wiz this year.
I do question a lot of the focus on a unified IDE when it comes to this strategy. It is not surprising that there is a specific "discontinued google acquisitions" page in wikipedia with that in mind.
Its user-facing stuff may or may not be great - and the consumer-level flops are legendary - but that is only the tip of the software iceberg.
"Ambitious" engineering means something very different inside of Google. Example: Spanner. Infra Spanner is correctly described as a "generational achievement". Very few people outside of Google have any idea that it exists, or what it does, and that's fine.
[1] Piper: https://en.wikipedia.org/wiki/Piper_(source_control_system)
[2] Critique: https://books.google.com/books?id=V3TTDwAAQBAJ&pg=PA399#v=on...
[3] Monorepo: https://dl.acm.org/doi/10.1145/2854146
Current Issues
* It is still buggy. They are fixing it fast, but it's not as seamless as VS Code. Extension support is not good.
* The harness, while good, is not on par with others. The harness makes all the difference.
* Gap between Gemini models and others. Hopefully they catch up soon (IO 2026?)
If you use Antigravity, what needs improvement to become mainstream?
I'd like to hear the perspective of the developer/user; the IDE provider has some incentive to take credit and imply high utilization reflects success rather than Google policy.
I'm interested in how tooling conditions developer expectations more broadly. I'd love to see a comparison of Linux OS development (all local+open+git, open but contributor hierarchy) vs Google (monorepo+required tooling, pre-allocated authority) from someone who's done both.
So I know what others spend and were spending in similar environments, in terms of actual dollars and where it roughly goes.
So let me say - it was not a small investment, in part because the all-in costs of engineers are very different. I'm really unsure why you would think otherwise.
Unlike others, Google is also remarkably good at quantifying the actual value something provides in developer productivity etc. Most engineers handwave this tremendously. Google has an amazing amount of telemetry. So I laugh when you talk about "the leverage over developer productivity", because the vast majority of companies I've worked at or talked with have almost no useful idea about their developer productivity (i.e., they can't even account for the majority of their developers' time at work) or how to invest effectively to do something about it. They can often account for <30% of the time developers are spending at work, etc.
As for perspectives - there is plenty of sentiment and other data. Cider is overall one of the top 5 most loved tools at Google, and had well over 90% developer satisfaction IIRC.
Don't worry -- I came to love Cider for the simplicity. I tolerate Cider V, but its "anything" nature means it's not good at anything in particular. These days, I mostly use it to peek into what (Antigravity's internal equivalent) does.
I was in the Eclipse camp, prior to the IntelliJ reversal. At the time there were at least double the number of active daily users of Eclipse, Google had hired some original Eclipse devs who did an awesome job making Eclipse work at Google scale, and basically I was back to where I had been (in productivity) before joining Google.
The decision was made to go with Eclipse. Then it magically went into some sort of internal box/decision process, and came out IntelliJ instead. I've always thought this was because of a sufficiently highly placed Android person with a personal preference, but I could be 100% wrong.
This made me sad. I escalated internally, compiled all of the usage numbers, did feature comparisons on what actually worked in each IDE, to no avail. Near the end, Eclipse's C++ support and refactoring actually worked reasonably well on Blobstore, which was NOT a small thing.
IMO IntelliJ never worked very well in google3, and certainly didn't have anywhere near the level of fluidity and speed that Eclipse had (all the way back from its VisualAge Smalltalk roots -- something even most users of Eclipse never really understood or got into). That said, Eclipse just had the wrong architecture for a massive monorepo. It could be made to work (and it was), but it was never a good fit...and getting the upstream changes needed was apparently problematic.
Plain simple Cider was better (in my mind) than IntelliJ's broken functionality that worked in the outside world, but not in google3 (at least not on the code bases that I worked in).
Plain old Cider just kept adding smart features that solved problems and made it nicer. By the time Cider V was coming, it had big shoes to fill.
I imagine a lot of it came from that push to "use outside world tools more rather than writing our own" which is great in theory, but really felt like a huge leap backwards in terms of convergence.
The days of using Eclipse were particularly bleak. These days I use Antigravity for the overwhelming majority of my work.
They subsequently shuttered Atlanta and it would take five or more years before they'd allow engineers there again.
It was very Google. Lost some truly talented (Hi Bruce!) software engineers who would go on to make terrific software elsewhere.
Very handy for seeing a problem, quickly solving it (sending out a CL) marking it autosubmit and just moving on.
I was the eng manager for that for a bit, and added some APIs to use to do code reviews inside of Eclipse or IntelliJ. That idea never took off, but when I showed it to the Code Search team in Munich, they loved it.
Critique was a fast follow.
As the team had to collaborate with the VSCode team, we got clearance for sharing information about it. The screenshots in the article were posted publicly on GitHub (in vscode issues). You can also find screenshots in https://research.google/blog/smart-paste-for-context-aware-a...
More generally, a lot has been communicated on developer infrastructure at Google.
I think many VSCode users are not familiar with the Comments UI, but it's used in e.g. the "GitHub Pull Requests" extension. Apart from that, some changes in the list of directories/files (for performance reasons) and a redesigned SCM integration.
I don't know which team that was, but to add to that, official support for IntelliJ at Google started quite a bit earlier. I was the second person to join a team writing IntelliJ plugins. We wrote a Blaze plugin not too long after Blaze launched, as it was becoming more popular.
Google tells me that Blaze launched in 2006, so I think it must have been 2007 or 2008.
You are talking, I believe, about the support for blaze builds in IntelliJ, which was fairly early on, as you point out.
I suspect Laurent is remembering some of the google3 mobile/android efforts, which were much later.
This is just on the "java" side, too. There were other plugins being built that were fairly specific to google3 support.
Blaze was started late 2005 or early 2006. Eclipse+IntelliJ was also at that time.
The IntelliJ blaze plugin was already started and out when I joined in 2007. My first job was to keep it from being rewritten yet another time, get teams to use it, and also keep it from being cancelled.
I eventually handed it over to JetBrains and I think it ships by default with IntelliJ now.
There is a similar internal product but the agentic part is shared between that and Cider.
You have access to an extremely powerful remote workstation that from a UI perspective functions almost identically to a local workstation, via Chrome Remote Desktop. Plus, no one builds things locally, even on that machine. There is a huge, absolutely amazing distributed build system that everyone uses for everything. (Again, Android and Chromium are different.)
So you don't really need a powerful local machine. I held out for a long time--there were a lot of growing pains in the early days. But eventually it got really, really good.
Size has nothing to do with it.
How is this enforced?
If you need to do development locally, you are either doing something very wrong or extremely specialized.
So there is effectively no motivation to copy the sources over. And because everything is on this distributed file system and built from it in a very bespoke environment, I would imagine (with no inside knowledge at all), that it is easy for auditors to detect when someone starts copying things out.
One is a framework called Wiz, which renders the frontend for a bunch of Google web apps. You can imagine that the Wiz team might want to refactor an API, but not have to worry about different apps using different versions. In a monorepo, they can just find all the callsites and update them in the same commit that makes the API change. There's no package.json in google3 - everything builds from HEAD. Therefore, the commit that makes a breaking change is also the commit that fixes the would-be breakage.
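The mechanics of that kind of change can be sketched with ordinary tools. The API names (renderInto/mount) and the repo layout below are invented for illustration; this assumes GNU sed.

```shell
# Toy monorepo: one framework directory plus two apps that call it.
repo=$(mktemp -d)
mkdir -p "$repo/wiz" "$repo/app1" "$repo/app2"
echo 'export function renderInto(el) {}' > "$repo/wiz/api.js"
echo 'renderInto(root);'                 > "$repo/app1/main.js"
echo 'renderInto(node);'                 > "$repo/app2/main.js"

# Because everything builds from HEAD, the API change and every
# callsite fix can land as one commit: find all callers and update
# them together with the definition.
grep -rl 'renderInto' "$repo" | xargs sed -i 's/renderInto/mount/g'

# Every file now references the new name; no stale callers remain.
grep -rc 'mount' "$repo"
```

In a real monorepo the "find all callsites" step is backed by code search and large-scale-change tooling rather than grep, but the invariant is the same: the commit that breaks the API is the commit that fixes every caller.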
This architecture evolved. Google used to use Perforce, which was a common commercial version control system before Git. Google had to figure out how to express the dependencies between packages in the monorepo (which can be in different languages with different build tools). They eventually created Bazel, which expresses those dependencies and orchestrates their build tools.
Build orchestration took a few attempts. Google3 is the third version of the monorepo, that is, the one that uses Bazel for dependency management.
When it finally failed in the most annoying way possible (the touch screen, which I do not use, started creating phantom clicks in the upper right corner of the display) I went looking for another Chromebook that was light, powerful, and well-built. Finding none, I now use MacBook Air and weep for the time I lose every time it needs an OS update.
Afterwards I was issued a 12" Pixelbook and it was surprisingly much more usable than I had expected! I could ssh into a Linux box for running builds and tests. Cider worked perfectly. It was snappy enough to serve as a thin client even on a 4K screen.
I guess maybe it was fancy back in mid 2010s, but my experience was a couple of years ago.
Sounds like all other editors were slow compared to Cider.
Basically that company (a well known social media company, not FB) tried to implement everything on their own. Infra is their own (kinda makes sense because it is so huge), IDE is their own, communication is their own (which has an interesting feature that if someone screen shares an internal doc, other people can click a link to access that doc, too, very useful).
I was very jealous about their tooling team (that's what I call real programming), but nevermind I quit after a few months due to some unrelated reason.
And an enormous set of problems that must be managed. But multirepos have their own set of issues, and which set of problems you want is highly situation dependent.
I got burnt out after a while, so that kinda wrapped up my experience working on large repos.
A monorepo doesn't necessarily mean long compile times, because that depends on the projects and their dependencies within the repo.
My recollection from 2009-2011 is that emacs and vim were the dominant editors (just as the TV show Silicon Valley depicted), and there was a decent-sized minority using Eclipse and Intellij, both of which had official support for Google tooling. The command line still largely ruled though, even though the official Google developer workstation was Goobuntu, Google-flavored Ubuntu. This reflected the overall developer population of the time.
I think Cider actually was invented a little earlier than the article describes. I have vague memories of some engineers experimenting with web-based IDEs that would integrate directly with Critique (the code-review software) as early as 2013-2014. Its use was not widespread when I left in 2014; there was still the impression that it wasn't powerful enough for daily driving.
When I came back in 2020, emacs/vim use was much lower, again probably reflecting differences in the general population of developers. Many more of the developers had been trained in the post-2010 developer ecosystem of VSCode, IntelliJ, etc, and this was reflected in tool usage at Google too. I'd say IntelliJ was the dominant IDE, with Cider a close second and Cider-V just starting to take market share. You still had to pry emacs and vim from a grizzled old veteran's hands.
By 2022 I'd transferred to an Android team, and Android Studio with Blaze was the dominant IDE, even as general IntelliJ usage in the company was falling. Cider just didn't have the same Android-specific support. Company-wide, Cider-V was growing the fastest, taking market share from both IntelliJ and Cider.
By 2024 Cider-V was dominant and there started to be a concerted push to standardize on it, particularly since new AI agent tools were coming out and they couldn't be supported on all editors that Googlers wanted to use.
As of my departure in 2026, the company-wide push was to standardize on Antigravity [1], which, as I understand it, won a turf war within the developer tools org and got blessed as the "official" Google AI coding agent. This also has the effect of concentrating developer time dogfooding Google's external AI coding offering, which hopefully should improve its quality. There's still significant Cider-V usage, but it's dropping, and execs are pushing Antigravity hard.
I'm a UXE, so I tend to use the same tools an external developer might. But I never got the impression that Cider was a recent development.
I'm genuinely thinking I may as well trade my brick of an M5 Pro for a 13" Chromebook; it's a strange time.
Fun fact: This particular version of hg with its extensions actually originated from Meta.
Duckie does still exist, and is probably one of the most used (and useful) AI tools at Google. Yes, it's just a Gemini wrapper with access to all the internal documentation. I wasn't doing daily development when I left so I don't know if it ever got into Cider-V.
Now, ironically, with so many extensions and LLM computing, users seem to forget that they chose Cider because it was lightweight.
You won’t have to spend a day fiddling with your local env. Everything just works immediately.
There are commercial alternatives like GH codespaces but not as good as Cider-V.
Fight for your autonomy as a dev, because they will always want to take it away.
Over time, engineers realize that Code Search is more important than their IDE.
Some of the most productive engineers I know at Google are proud (and adaptable) VIM users, always have been, and nobody is going to tell them they should use anything else. They're also just fine with AI tooling, and fit it right into their VIM workflows.
Ah, I feel so much better now. ;)
VSCode never made it past the first 10% of what Eclipse did (does). VSCode did succeed at being something for everybody, available everywhere.
It's also nice that it stores all my preferences in the cloud, so switching machines is seamless (helpful when my macbook broke a couple weeks ago and I had to use a loaner chromebook for a day).
It's also well integrated with google3 and codesearch, and seamlessly runs tests on remote machines with tmux integration and all.
Not all of Google's tooling is my favorite (like their source control), but the IDE is great.
When I first started the environment you used depended entirely on language. In the C++ and Python space, there was the vim and emacs divide. With Java it was more complicated. Some still used vim/emacs but a lot of people used Eclipse.
Now Eclipse was a real problem at Google because of the source control system. Java IDEs are primarily built to import binaries, specifically jars. In the outside world, these dependencies are managed via Ant (very early days), Maven/Gradle or the like.
At Google there's a mono-repo (Perforce/Piper) and you check out parts of it locally and rely on the rest via a network connection (to SrcFS IIRC, it's been a while). This was neat because you could edit a file locally and the dependencies would just recompile (via Blaze).
So for Eclipse a whole lot of initialization had to be done and the IDE would fall over. A lot. It had a team of ~10 working on it at one point. Then somebody did a 20% project called magicjar. Magicjar took a Perforce client and built all the dependencies as jars that could be imported directly without parsing the entire source tree (which was usually huge). This made it possible, even preferred, to use IntelliJ, which is what I did. Magicjar was great.
Other people actually made CLion work reasonably well with C++ too. That was nice. This was a much bigger undertaking with many more corner cases just given how C++ works (ie headers and templates).
So checking out a client was relatively heavyweight, even with a minimal local tree. And, if you worked on Google3, you had to do this a lot. You might need to do a config file change. This was the real starting point for Cider because it was way nicer to do config file changes with it.
Obviously I don't know where all this went from there. VS Code as a Cider frontend? Ok, that was news to me. Engineers being unhappy when things change and when the slightest thing works differently is the least surprising thing I've ever heard.
Oh it's worth adding that in my time many people didn't use Perforce (P4) directly. They used somebody else's project, which was a Git frontend for it, called Git5. I believe it was already being deprecated while I was still there. But Git5 modelled a P4 change as a branch so you could play around with your Git commits locally and then squash them into a single P4 change. I actually liked this a lot.
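The git5 model described above can be illustrated with plain git. git5's actual commands were internal to Google, so this is just a generic sketch of the "one branch per change, squash before export" workflow, in a throwaway repo:

```shell
set -e
# Set up a throwaway repo with a known base branch name.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git checkout -qb main
git commit -q --allow-empty -m "base (stands in for the P4 head)"

# One local branch corresponds to one pending P4 change.
git checkout -qb my-change
echo a > a.txt && git add a.txt && git commit -qm "wip: add a"
echo b > b.txt && git add b.txt && git commit -qm "wip: add b"

# Squash the local commits into the single change that would be
# exported to Perforce as one changelist.
git reset -q --soft main
git commit -qm "Add a and b"
git log --oneline   # two entries: the base plus the one squashed change
```

The appeal is exactly what the comment describes: you get cheap local commits to play with, and the messy history collapses into one clean change at submit time.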
Code references are less important inside Google editors, because we have a code viewer tool inside the web browser.
Most people read, explore, follow references, and share permalinks to the view-only tool. It’s a lot better than viewing code in GitHub. It’s super fast, is connected to language servers and can actually trace references, and overall has a million little features optimized for reading code.
We also have a code reviewer tool, and a separate tool to run and view CI runs.
So what’s left for the editor? Syntax highlighting?
I would tend to view code, run tests and CI, and review in separate tools specialized for their specific use case. The code editor was just a place where I would type in my changes.
I’d imagine this workflow feels weird to people who learned in a one-stop-shop IntelliJ and GitHub world. But I can’t emphasize enough how much better these other tools were compared to GitHub. So a code editor that also lets me read, review, and test code didn’t really matter for me when I had a collection of smaller tools specialized for each individual task.
https://source.chromium.org/chromium/chromium/src/+/main:ipc...
Nit: not connected to language servers, it's connected to Kythe. LSP doesn't have the same kind of functionality.
AI has mostly changed the way I write code, I guess, so I rarely use JetBrains anymore, but a few years ago it was clearly a win to use a real IDE at least for Kotlin programming.
The history of Google's relationship to version control is even more interesting than the editors. It went from CVS in 1998 to Perforce (P4) in 2000, then gcheckout and g4 in ~2006; OverlayFS was invented in 2008, git5 came out in 2009, CitC obsoleted OverlayFS in ~2012, and Piper built all of this into the VCS in ~2013-2014. I was gone from 2014-2020, during which we apparently got the hg and jujutsu frameworks, and when I got back in 2020 you'd just check out a .blazeproject from your IDE and everything would magically work. Many of these started as 20% projects (I used to have lunch with the guy who invented OverlayFS; interesting character and one of the best programmers I knew) and then got folded into the "official" way of doing things once grassroots adoption showed the execs that this was how people really wanted to work.
Git5 would copy some directories but builds would still fallback to files from the monorepo if you didn't track them. It was convenient for me since I could just grep and do fuzzy matching from my editor. Now I have to do some extra work to avoid grepping the entire monorepo. LLMs sometimes still try to grep the entire repo lol.
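The "extra work to avoid grepping the entire monorepo" can be as simple as pointing grep at the directories you actually track instead of the checkout root. The layout below is entirely made up for illustration:

```shell
set -e
work=$(mktemp -d)
cd "$work"
# Pretend layout: you track tracked/foo locally; the rest of the
# monorepo is only reachable through the overlay (names are hypothetical).
mkdir -p tracked/foo rest_of_monorepo/bar
echo "needle" > tracked/foo/a.txt
echo "needle" > rest_of_monorepo/bar/b.txt

# Search only the paths you track, not the whole tree:
grep -rln needle tracked
```

The same scoping works for editor fuzzy-finders: point them at the tracked subtrees rather than the repository root, and the overlay-mounted bulk of the repo never gets walked.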
Now, you can use a Perforce, Mercurial, or jj interface and it works fine.
Pair programming was very in vogue, and I used to get in a little later than some, which was a great excuse to just hop on the machine of someone who'd already gone through that pain.
Gold.
https://www.linkedin.com/pulse/google-fires-entire-python-te...
https://www.airs.com/blog/archives/670
https://en.wikipedia.org/wiki/Google_Kythe
etc