
> Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The net gain was composability of microservices, distribution of computing resources, and the ability to wall off implementation details. Just because those requirements were routinely ignored in the era of monoliths doesn't mean the complexity wasn't essential or didn't exist.



Before "microservices" there are services, which are also composable. And in the realm of monoliths there are also modules. Which are the key to composability.

What microservices give you is a hard boundary between modules that you cannot cross (you can weaken it, but you cannot eliminate it). This means the internal state of a module now has to be explicitly and more deliberately exposed, rather than merely bypassed by a lack of access control, someone flipping private to public, or capitalizing a name in Go. If there's any real benefit to microservices, this is it: the hard(er) boundary between modules. But it's not novel; we've had that concept since the days of COBOL, and hardware engineers have had it even longer.
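A minimal Java sketch of the kind of soft boundary being described (the billing package and BillingService interface are made up for illustration): the module hides its implementation behind a public interface, and the boundary holds only as long as nobody flips an access modifier.

    // File: com/example/billing/BillingService.java
    package com.example.billing;

    // The only intended entry point into the module.
    public interface BillingService {
        String createInvoice(String customerId, long amountCents);
    }

    // File: com/example/billing/DefaultBillingService.java
    package com.example.billing;

    // Package-private: invisible outside com.example.billing. Changing this one
    // keyword to `public` is all it takes to breach the boundary; exactly the
    // kind of expedient shortcut a hard process boundary would have prevented.
    class DefaultBillingService implements BillingService {
        @Override
        public String createInvoice(String customerId, long amountCents) {
            return "INV-" + customerId + "-" + amountCents; // toy implementation
        }
    }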

The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices.


"The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices."

I just doubt that people who don't have the discipline to write decent modular code will do any better with microservices. You will end up with a super complex, hard-to-change, hard-to-maintain system.


Over-focusing on the source code leads to the wrong conclusions.

The true remedy in both cases is refactoring. So if the team doesn't have time for refactoring in a monolith, then the switch to microservices would need to free up enough time for the team to start doing it.

Can that even work at the level of a single team?


Exactly. 100% right.


There are tools for enforcing boundaries.

One name for this is a "Modulith", where you use modules that have a clear, enforced boundary. You get the same composability as microservices without the complexity.

Here's how Spring solves it: https://www.baeldung.com/spring-modulith

It's basically a library that ensures strict boundaries. Communication has to go through an interface (similar to a service API), and you are not allowed to leak internal logic, such as database entities, to the outer layer.
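For illustration, the usual way this is enforced with Spring Modulith is a test that fails the build whenever one module reaches into another module's internals (MyApplication here is a placeholder for the Spring Boot application class):

    import org.junit.jupiter.api.Test;
    import org.springframework.modulith.core.ApplicationModules;

    class ModularityTests {

        // Treats each direct sub-package of MyApplication's package as a module
        // and fails if a module depends on another module's non-exposed internals.
        @Test
        void verifyModuleBoundaries() {
            ApplicationModules.of(MyApplication.class).verify();
        }
    }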

If you later decide to convert the module into a separate service, you simply move the module to a new service and write a small API layer that uses the same interface. No other code changes are necessary.

This enables you to start with a single service (a modulith) and split it into microservices later, without any major refactoring, if you see the need for it.
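As a rough sketch of that extraction path (OrderService, the URL, and the endpoint are hypothetical): callers keep depending on the same interface, and only the binding changes from the in-process module to a thin HTTP adapter.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // The module's existing public interface; callers depend on this and nothing else.
    interface OrderService {
        String getOrderStatus(String orderId);
    }

    // Added only when the module is extracted: a thin adapter that forwards the
    // same calls to the new service over HTTP, so no calling code needs to change.
    class RemoteOrderService implements OrderService {
        private final HttpClient http = HttpClient.newHttpClient();
        private final String baseUrl; // e.g. "http://orders.internal:8080"

        RemoteOrderService(String baseUrl) {
            this.baseUrl = baseUrl;
        }

        @Override
        public String getOrderStatus(String orderId) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/orders/" + orderId + "/status"))
                    .GET()
                    .build();
            try {
                return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
            } catch (Exception e) {
                throw new RuntimeException("Order service call failed", e);
            }
        }
    }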


> The challenge in monoliths is that the boundary is so easily breached

The biggest challenge with monoliths is hitting the limits of a single process and machine.


Typically the application server is stateless and any persistent state is kept in a database, so you can just spawn another instance on another machine.


Sure, but there are still limits, such as binary size and working memory, etc.


Could you give a concrete example from your experience? I ask because in my experience, services have had a relatively small (say, less than a few hundred GB) fixed working memory usage, and the rest scales with utilisation, meaning it would help to spawn additional processes.

In other words, it sounds like you're speaking of a case where all services together consume terabytes of memory irrespective of utilisation, but if you can split it up into multiple heterogeneous services each will only use at most hundreds of GB. Is that correct, and if so, what sort of system would that be? I have trouble visualising it.


Let's imagine Facebook: we can partition the monolith by user, but you would need the entire code base (50+ million lines?) running in each process just in case a user wants to access that functionality. I'm not saying one can't build a billion-dollar business using a monolith, but at some point the limit of what a single process can host might become a problem.


Things like Facebook and Google are at a level of scale where they need to do things entirely differently from everyone else, though. For example, for most companies you'll get better database performance with monotonic keys, so that work is usually hitting the same pages. Once you reach a certain size (which very few companies/products do), you want the opposite, so that none of your nodes gets too hot or becomes a bottleneck. Unless you're at one of the largest companies, many of the things they do are the opposite of what you should be doing.
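A rough sketch of that contrast (the key formats are made up for illustration): a single database benefits from monotonic keys because consecutive inserts hit the same hot index pages, while a horizontally partitioned store wants keys spread out so no single node becomes a hotspot.

    import java.util.UUID;
    import java.util.concurrent.atomic.AtomicLong;

    class KeyStrategies {
        private static final AtomicLong SEQUENCE = new AtomicLong();

        // Monotonic key: consecutive inserts land next to each other, keeping the
        // rightmost index pages in cache; a good default for one big database.
        static String monotonicKey() {
            return String.format("%019d", SEQUENCE.incrementAndGet());
        }

        // Scattered key: a random prefix spreads inserts across partitions so no
        // single node runs hot; only worth it at very large scale.
        static String scatteredKey() {
            return UUID.randomUUID() + "-" + SEQUENCE.incrementAndGet();
        }
    }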


I agree that most will be fine with a monolith, I never said anything to the contrary. But let's not pretend that the limits don't exist and don't matter. They matter to my company (and we're not Facebook or Google, but we're far older and probably have more code).


We've been here before with object-oriented programming, which was supposed to introduce a world of reusable, composable objects, even connected by CORBA if you wanted to run them remotely.


This is a very nice academic theory, but in real life you get half-assed, half-implemented hairballs that suck out your will to live if you have to untangle them.


What is the sequence of events in real life that takes us from nice theory to hairballs, and that academics fail to foresee?


Most companies severely underestimate the complexities of a distributed system and the work that goes into a truly resilient, scalable setup of this kind.

An infrastructure of this sort is meant to solve very hard problems, not to make regular problems much harder.


There's also the distribution of work... if one set of teams manages the deployment and communication issues between the microservices, while the microservice developers concentrate on the domain, it can be a better distribution of work. Whereas if the same teams/devs are going to do both sides of this, it may make more sense to have a more monolithic codebase/deployment.



