gRPC (a very Googly thing) took it all, hook, line, and sinker, and made it URLesque.
Can’t recall how the ORB overhead has been resolved in gRPC.
That being said… once you do configure it properly it can be a powerful tool. The complexity though is usually not worth it unless you’re at a certain scale.
My company has run this exact code in production since it was created in 2022. We probably have several times more than 1000 rps of gRPC traffic running internally, including over the public internet for hybrid cloud connectivity. That being said, gRPC's xDS client is not always bug-free.
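For context, pointing a Go client at xDS is mostly an import plus an xds:/// target; a rough sketch (the target name here is hypothetical, and a bootstrap file referenced by GRPC_XDS_BOOTSTRAP is assumed):

    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        _ "google.golang.org/grpc/xds" // registers the xds:/// resolver and balancers
    )

    func main() {
        // Hypothetical xDS-managed target; routing and load-balancing config
        // arrives from the control plane, not from flags on the client.
        conn, err := grpc.Dial(
            "xds:///inventory.example.internal",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }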
gRPC/protobuf is largely a Google cult. I've seen too many projects with complex business logic simply give up and embed JSON strings inside pb. Like, WTF...?
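To be fair, protobuf does have a sanctioned escape hatch for genuinely free-form data: the well-known google.protobuf.Struct type. A small Go sketch (the field names are made up):

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/types/known/structpb"
    )

    func main() {
        // Struct models arbitrary JSON-like data natively, which avoids
        // smuggling JSON strings through a string/bytes field.
        payload, err := structpb.NewStruct(map[string]interface{}{
            "user":  "jane",
            "flags": []interface{}{"beta", "dark-mode"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Println(payload.GetFields()["user"].GetStringValue()) // jane
    }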
Everything was good in the beginning, as long as everyone submitted their .proto files to a centralized repo. Once one team starts to host their own, things break quickly.
It occurred to me that gRPC could optionally just serve those .proto files in the initial h2 handshake on the wire. That adds just a few kilobytes but solves a big problem.
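Something close to this already exists: the gRPC server reflection service serves descriptors over the wire on demand, and tools like grpcurl use it to discover schemas. Enabling it in Go is one line; a minimal sketch:

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/reflection"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer()
        // Register service implementations here, then expose their
        // descriptors via the standard reflection service.
        reflection.Register(s)
        log.Fatal(s.Serve(lis))
    }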
Is this an issue with protobufs per se though? It's a data schema. How are people supposed to develop to a shared schema if a team doesn't - you know - share their schema? That could happen with any other particular choice for how schemas are defined.
There was a blog post a few years ago where an engineer working on the Google Cloud console complained that simply adding a checkbox to one of the pages required modifying ~20 internal protos and took 6 months to roll out. That's an obvious downside that I wish I knew how to fix.
https://kmcd.dev/posts/protobuf-unknown-fields/ discusses the scenario you're hinting at.
It's possible in the story you mention that each of those ~20 internal protos were different messages, and each hop between backends was translating data between nearly identical schemas. In that case, they'd all need to be updated to transport that data. But that's different and the result of those engineers' choice for how to structure their service definitions.
Do you mean the reflection protocol, or some other .proto files?
EDIT: also, although the wire protocol may tolerate unknown or missing data, the application almost always doesn't.
EDIT AGAIN: I'm not saying this is how it should be, just that this is the low-energy state the socio-technical system seems to arrive at over time. So ideally it should be simple, but due to imperfect decisions it gets horribly complicated over time.
Edited to reply to your edits: People who are just bozos with computers will never be kept from bozotry by any interchange format. If they lack any semblance of foresight, then maybe they should simply get a different line of work. Postel's law is in force here. If you start sending me emails with extra headers, my email program is never going to care. Protobufs are the same way.
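That analogy holds on the wire: a protobuf decoder keeps fields it doesn't recognize and re-emits them when it serializes. A small Go sketch, using the well-known StringValue as a stand-in "older" schema and hand-rolling an extra field as the "newer" data:

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/encoding/protowire"
        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/wrapperspb"
    )

    func main() {
        // Bytes from a hypothetical newer schema: field 1 (string) plus an
        // extra field 99 that the older StringValue schema knows nothing about.
        newer := protowire.AppendTag(nil, 1, protowire.BytesType)
        newer = protowire.AppendString(newer, "hello")
        newer = protowire.AppendTag(newer, 99, protowire.VarintType)
        newer = protowire.AppendVarint(newer, 42)

        // Decoding with the older schema tolerates the unknown field...
        var older wrapperspb.StringValue
        if err := proto.Unmarshal(newer, &older); err != nil {
            panic(err)
        }
        fmt.Println(older.GetValue()) // hello

        // ...and preserves it on re-serialization, so an intermediary that
        // just decodes and re-encodes doesn't silently drop the data.
        out, _ := proto.Marshal(&older)
        fmt.Println(len(out) == len(newer)) // true
    }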
gRPC is terrible, but ConnectRPC allows sane integration of PB with regular browser clients. Buf.build also has a lot of helpful tools, like backwards compatibility checking.
But it's not worse than other alternatives like Thrift. And waaaaaaaaaayyyyyy better than OpenAPI monstrosities.
- Configuring the Python client with a JSON string that did not seem to have a documented schema
- Error types that were overly general in some ways and overly specific in others
- HAProxy couldn't easily health check the service (see the sketch after this list)
There were a few others that I can't remember because it was ~5 years ago. I liked the idea of the contract, and protobuf seemed easy to write, but I had no need for client-side DNS load balancing and the like, and I wasn't working in Go.
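On the health-check point, the usual answer these days is the standard gRPC health checking protocol, which load balancers and probes can query; a minimal Go sketch of registering it:

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/health"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer()

        // The standard health service; probes such as grpc_health_probe can
        // query it instead of guessing at an HTTP endpoint.
        h := health.NewServer()
        healthpb.RegisterHealthServer(s, h)
        h.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)

        log.Fatal(s.Serve(lis))
    }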
It works really well, and the tooling is pretty good, though it isn't that widely supported yet. Rust, for one, doesn't have an implementation. But I've been using it at work, and we've basically had no issues with it (Go and TypeScript).
The good thing is that it can interoperate with normal gRPC servers, etc. But that of course locks it into the protobuf wire format, which is part of the trouble ;)
0: https://connectrpc.com/
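To make the browser-client point concrete: a Connect endpoint answers a plain HTTP POST carrying JSON, so no gRPC stack is needed on the caller's side. A sketch with a hypothetical service path and payload:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // Hypothetical Connect route; any ordinary HTTP client (including a
        // browser's fetch) can call it with Content-Type: application/json.
        resp, err := http.Post(
            "http://localhost:8080/greet.v1.GreetService/Greet",
            "application/json",
            strings.NewReader(`{"name": "Jane"}`),
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }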
What do you mean by this? Genuinely curious, as someone who's followed that project in the past.
My goal was never to serve the community but instead to leverage it to build a business. Ultimately that failed. The truth is it's very difficult to sustain open source. Go-micro was never the end goal. It was always a stepping stone to a platform, e.g. a microservices PaaS. A lot of hard lessons were learned along the way.
Now with Copilot and AI I'm able to go back and fix a lot of issues, but nothing will fix trust with a community or the passage of time. People move on. It served a certain purpose at a certain time.
Note: the company behind ConnectRPC raised $100m, but more for a build system around protobuf than for the RPC framework. Still, this was my thinking as well: the ability to raise $10-20m would have created the space to build the platform off the back of the framework's success.
Now it's just "buf generate": every developer has the exact same settings defined in the repo, and on the frontend side we just import the generated TypeScript client and have all the types instantly available. It's also nice to have hosted documentation to link people to.
My experience is mostly with Go, Python and TS.
I'm far from an expert, yet I've come to believe that what you've described is basically "code smell". And the smell probably comes from seemingly innocuous things like enums.
And you wondered if the solution was using Go, but no, it isn't. I was actually using Go at the time myself (this was a few years ago, and I used Twirp instead of gRPC) - but I realised that the RDBMS > "Server (Go)" layer had quirks, and then the "Server (Go)" > "API (JS)" layer had other quirks - and so I realised that you may as well "splat" out every attribute/relationship. Because ultimately, that's the problem...
Eg: is it a null field, or undefined, or empty, or false, or [], or {}? ...
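In Go, at least, the usual way to keep those states distinct at the JSON boundary is pointers: nil means "absent", while a non-nil pointer to a zero value means "explicitly empty". A small sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Pointers separate "field absent" (nil) from "explicitly empty"
    // (pointer to a zero value); plain values collapse the two states.
    type Account struct {
        Nickname *string  `json:"nickname,omitempty"`
        Tags     []string `json:"tags,omitempty"`
    }

    func main() {
        empty := ""
        a, _ := json.Marshal(Account{Nickname: &empty}) // present but empty
        b, _ := json.Marshal(Account{})                 // absent entirely
        fmt.Println(string(a)) // {"nickname":""}
        fmt.Println(string(b)) // {}
    }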
[] == my Valentine's Day inbox. :P