However, the restrictions for generic replay-based time-travel debugging mostly amount to not using shared memory and, as a corollary, not using multiple threads in a process (multiple processes are fine). Deliberately architecting your system the way the article describes is largely unnecessary: the generic schemes have low overhead, require much less work, apply to most codebases that could even attempt a deliberate re-architecture, and integrate well with existing tooling and visualizers.
You can even relax these restrictions further and allow explicit shared memory if you record those accesses. And you can allow everything if you record all accesses. The overhead of each scheme increases with the amount of recording needed to capture these forms of non-determinism.
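The record-then-replay idea can be sketched as a gate in front of every nondeterministic read (clock, RNG, shared memory, ...): record mode logs each value as it is observed, replay mode serves the logged values back so the rerun is deterministic. This is a minimal illustration of the principle, not any particular tool's implementation; the `Recorder` class and its API are made up for this example.

```python
import time

class Recorder:
    """Record-or-replay gate for nondeterministic reads.

    In "record" mode every observed value is appended to a log; in
    "replay" mode the logged values are returned in order, so a rerun
    sees exactly the same nondeterministic inputs as the original run.
    """
    def __init__(self, mode, log=None):
        self.mode = mode
        self.log = log if log is not None else []
        self.i = 0  # replay cursor

    def read(self, source):
        if self.mode == "record":
            value = source()        # actually consult the real source
            self.log.append(value)  # ...and remember what it returned
            return value
        value = self.log[self.i]    # replay: serve the recorded value
        self.i += 1
        return value

# Record a run, then replay it: the replayed read sees the same value.
rec = Recorder("record")
t1 = rec.read(time.time)
rep = Recorder("replay", rec.log)
t2 = rep.read(time.time)
assert t1 == t2
```

The more of these sources you have to gate (all shared-memory accesses in the limit), the larger the log and the higher the recording overhead, which is the trade-off described above.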
I had huge success writing a trading system where everything went through the same `on_event(Inputs) -> Outputs` function of the core, with a thin shell translating everything into inputs and the outputs into actions. I actually had a handful of these components communicating via message passing.
This worked rather well as most of the input is async messages anyway, but building anything else this way feels very tiresome.
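The pure-core/thin-shell split described above can be sketched roughly like this. The `Input`/`Output` types and the order-counting logic are invented for illustration (a real trading core would be far richer); the point is that the core does no I/O, so replaying the shell's input log reproduces the run exactly.

```python
from dataclasses import dataclass, field

# Hypothetical event and action types; the real Inputs/Outputs are
# domain-specific (market data, order acks, timers, ...).
@dataclass(frozen=True)
class Input:
    kind: str

@dataclass(frozen=True)
class Output:
    action: str
    seq: int

def on_event(state: dict, event: Input) -> tuple[dict, list[Output]]:
    """Pure core: no I/O, no clocks, no globals -- same inputs, same outputs."""
    if event.kind == "order":
        n = state.get("orders", 0) + 1
        return {**state, "orders": n}, [Output("ack", n)]
    return state, []

def run_shell(events, log):
    """Thin shell: records every input, feeds it to the core, and turns
    Outputs into side effects (here just collected in a list)."""
    state, effects = {}, []
    for ev in events:
        log.append(ev)  # append-only input log is all you need for replay
        state, outs = on_event(state, ev)
        effects.extend(outs)
    return state, effects

live_log: list[Input] = []
state1, fx1 = run_shell([Input("order"), Input("order")], live_log)

# Replaying the recorded inputs through the pure core gives the same run.
state2, fx2 = run_shell(live_log, [])
assert (state1, fx1) == (state2, fx2)
```

Because the core never touches the outside world, the input log is a complete record of the run, which is what makes this style of debugging work.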
It meant that the user received meaningful short updates as things progressed, with detailed information in system logs.
This made it much easier for testers and users to report bugs and for developers to understand what to look for in logs.
I'd call this logging calls to your business logic layer, then replaying the logged calls against that layer in a development environment to debug the problem.
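A minimal sketch of that idea, assuming the business-logic calls are deterministic: a decorator records each call's name and arguments, and a `replay` helper feeds the log back through the same functions in a dev environment. `apply_discount` and the registry are hypothetical names for this example.

```python
call_log = []  # in production this would go to durable storage

def logged(fn):
    """Record each call so it can be replayed later to reproduce a bug."""
    def wrapper(*args, **kwargs):
        call_log.append((fn.__name__, args, kwargs))
        return fn(*args, **kwargs)
    wrapper.inner = fn  # keep the raw function so replay doesn't re-log
    return wrapper

@logged
def apply_discount(total, percent):
    """Hypothetical business-logic call; assumed pure/deterministic."""
    return round(total * (1 - percent / 100), 2)

def replay(log, registry):
    """Re-run logged calls against the business logic in a dev environment."""
    return [registry[name].inner(*args, **kwargs) for name, args, kwargs in log]

live = apply_discount(100.0, 15.0)  # normal call from the UI layer, gets logged
replayed = replay(call_log, {"apply_discount": apply_discount})
assert replayed == [live]           # same inputs, same result
```

This only reproduces bugs faithfully if the logic layer is deterministic given its arguments, which is another argument for keeping I/O out of it.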
Your business logic layer should be separate from your UI/presentation layer.
Makes it easy to test them separately if they're not tightly coupled.
Also, if you want to reuse your business logic layer in a different UI environment, switching UIs is much easier when the two aren't tightly coupled.