100% true in retrospect.
This:
1. explains Brooks' assertion that adding coders to a late project makes it later
2. emphasises the importance of clearly defining interfaces between components, interfaces being the "paths of communication" between the coders of those components.
So your assertion is well founded.
Then the engine would find the best way to resolve the graph and fetch the results. You could still add your imperative logic on top of the fetched results, but you don't concern yourself with the minutiae of resilience patterns and how to traverse the dependency graph.
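A minimal sketch of that idea: you declare only which pieces of data depend on which, and a tiny engine resolves the graph and fetches results in a valid order. The fetchers and dependency graph here are entirely hypothetical stand-ins for what would be RPC calls in a real system.

```python
from graphlib import TopologicalSorter

# Hypothetical fetchers; in a real system each would be a remote call.
FETCHERS = {
    "user":   lambda deps: {"id": 1, "name": "Ada"},
    "orders": lambda deps: [{"user_id": deps["user"]["id"], "total": 42}],
    "report": lambda deps: {"user": deps["user"]["name"],
                            "spend": sum(o["total"] for o in deps["orders"])},
}

# Declare only *what* depends on *what*; the engine picks execution order.
DEPS = {"user": set(), "orders": {"user"}, "report": {"user", "orders"}}

def resolve(graph, fetchers):
    results = {}
    # static_order() yields nodes with all dependencies resolved first.
    for node in TopologicalSorter(graph).static_order():
        results[node] = fetchers[node]({d: results[d] for d in graph[node]})
    return results

print(resolve(DEPS, FETCHERS)["report"])  # {'user': 'Ada', 'spend': 42}
```

A real engine would add parallel fetching of independent nodes, retries, and caching on top of the same declarative graph; your imperative logic then runs only on the resolved results.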
Commands go to specific microservices with local state persisted in a small DB; queries go to a global aggregation system.
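That split can be sketched as a trivial router; the service names and endpoints here are purely illustrative.

```python
# Hypothetical endpoints; names are illustrative only.
COMMAND_SERVICES = {
    "create_order": "http://orders-svc/commands",
    "update_user":  "http://users-svc/commands",
}
QUERY_SERVICE = "http://query-aggregator/query"

def route(message):
    """Commands go to the owning microservice; all queries go to the aggregator."""
    if message["kind"] == "command":
        return COMMAND_SERVICES[message["name"]]
    return QUERY_SERVICE

print(route({"kind": "command", "name": "create_order"}))  # http://orders-svc/commands
print(route({"kind": "query", "name": "orders_by_user"}))  # http://query-aggregator/query
```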
You could also build a fancier federated querying system that combines the two, taking a Mangle query and analyzing and rewriting it. For that you're on your own though - I prefer developers hand-crafting something that fits their needs to a big convoluted framework that tries to be all things to all people.
Then you can run A, B, C, and D from a consistent snapshot of data and get correct results.
The only thing microservices allow you to do is scale stateless compute, which is (architecturally) trivial to scale without microservices.
I do not believe there has been any serious server app that has had a better solution to data consistency than SQL.
All these 'webscale' solutions I've seen basically throw out all the consistency guarantees of SQL for speed. But once you need to make sure that different pieces of data are actually consistent, you're basically forced to reimplement transactions, joins, locks etc.
Microservices are just a slightly more reliable version of that, since you can hassle the author as a coworker instead of via harried FCWSNEGW support mouse.
While this is true, for efficiency reasons it's often better to treat even local dispatch as if it were "network" -- chasing pointers and doing things one at a time in a loop is far less efficient on a modern architecture than doing things in bulk and vectorized.
Non-uniform memory hierarchies, caches, branch predictors, SIMD, and now GPUs all tend to reward working with data in batches.
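The loop-versus-batch contrast can be shown in a few lines. This is a sketch, not a benchmark; the pricing rule is made up, but the two functions compute the same thing one-at-a-time versus in bulk.

```python
import numpy as np

prices = np.random.default_rng(0).uniform(1, 100, 100_000)

# One-at-a-time style: a branch and an add per element.
def total_loop(prices):
    total = 0.0
    for p in prices:
        if p > 50:
            total += p * 0.9   # made-up discount on expensive items
        else:
            total += p
    return total

# Batched style: one vectorized pass, no per-element Python branch.
def total_vec(prices):
    return np.where(prices > 50, prices * 0.9, prices).sum()
```

On a typical machine the vectorized version is orders of magnitude faster, precisely because it hands the hardware a contiguous batch instead of a pointer-chasing loop.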
If I were to think of a "pure" model of computation that unified remote and local it would be to treat the entire machine in terms of the relational data model, not objects. To treat all data manipulation and decisions like a query.
And to ideally in fact have the same concept of a query optimizer / planner that a DBMS has, which is able to make decisions on how to proceed based on the cost of the storage model, the indexes, etc. because it has a bigger picture of what the programmer is trying to accomplish.
Secondly, if you are not doing event sourcing from the get-go, doing distributed systems is stupid beyond imagination.
When you do event sourcing, you can do CQRS and therefore have zero need for some humongous database that scales ad infinitum and costs an arm and a leg.
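The core of that combination fits in a short sketch: the write side only appends facts to a log, and each read model is a projection folded from the log. Event names and the balance projection here are invented for illustration.

```python
from collections import defaultdict

# Append-only event log: the write side just records facts.
events = []

def record(event_type, **data):
    events.append({"type": event_type, **data})

# Read side (CQRS): a projection folds the log into a query-friendly view.
# You can keep many such views, each sized and indexed for its queries.
def project_balances(log):
    balances = defaultdict(int)
    for e in log:
        if e["type"] == "deposited":
            balances[e["account"]] += e["amount"]
        elif e["type"] == "withdrew":
            balances[e["account"]] -= e["amount"]
    return dict(balances)

record("deposited", account="alice", amount=100)
record("withdrew", account="alice", amount=30)
print(project_balances(events))  # {'alice': 70}
```

Because every read model is derivable from the log, you can rebuild or add views at will instead of scaling one giant shared database.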
Generally, the microservices that I've seen work well are the type of thing you could decide to "buy" in the build-vs-buy debate - like you say, stuff that is either "fire and forget" or where you only care about a fixed output being produced, not the guts of how it's done.
Anything that depends on your core business logic within the service (if customer type X, do custom process Y) is probably not going to be as clean a fit for microservices as you'd think, especially with an emergent design.
But some of that could be mitigated I guess.
This is not AI-specific and nothing new, and it's also precisely why microservices are a good solution to some problems: they reduce a team's cognitive load (if architected properly; caveats, team topologies, etc, etc)
I still thoroughly want to see capnproto or capnweb ship third-party handoff, so we can do distributed systems where we tell microservice B to use the results from microservice A to run its compute, without needing to proxy those results through ourselves. Oh to dream.
I once participated in implementing a system as a monolith, and later on handled the rewrite to microservices, to 'future-proof' the system.
The nice thing is that I have the Jira tickets for both projects, so I have actual hard proof: the microservice version absolutely didn't go smoother or take less time or fewer dev hours.
You can really match up a lot of it feature-by-feature and it'll be plainly visible that the microservice version of the feature took longer and had more bugs.
And IMO this is the best-case scenario for microservices. The 'good thing' about microservices is that once you have the interfaces, you can start coding. This makes these projects look more productive, at least initially.
But the issue is that, more often than not, the quality of the specs ranges from not great to awful. I've seen projects where Team A and Team B coded their services against wildly different interfaces, and it was only found in the final stretch that these two parts do not meet.
Multi-repo appears to make teams faster (builds are faster! fewer merge conflicts!) but, like micro-services, they push complexity into the ether. Things like updating service contracts, library updates, etc. all become more complicated.
It's not that hard to version and deploy multiple services and libraries. If you need the flexibility of that separation, it can very much be worth it.
But if you separate them and still treat them like you're in a mono whatever and you cut corners on keeping your separation clean and clear, you're going to have a bad time.
Either pattern has its advantages. It's best to remember that they're just patterns, and you should be doing one or the other for a reason.
In fact this is not even an 'architecture' but a higher-level organizational layer.