I'm Venezuelan. All I can say is: I know, and that's why I dedicated ten years of my life to help Bitcoin succeed, mostly in the form of Bitcoin Cash these days. I oppose socialism and understand how Bitcoin can help prevent elitists from gaining control over the economy.
Look around, everything is a vehicle for speculation, from houses to baseball cards to oil.
Of any network that exists over the internet, Bitcoin is the least elitist. Participation is permissionless; anyone can spin up a wallet, and it requires no identification. Transactions are borderless, and the code is open source. No one can prevent you from sending/receiving at a protocol level.
> Bitcoin is totally elitist and mostly a vehicle for speculation.
I agree, but the parent poster does have a point. There are places where the economic situation has deteriorated to the point that using Bitcoin can solve some problems. Some situations are worse than Bitcoin's drawbacks, but it's a rare phenomenon.
It seems to me that there is a bias for complexity among software engineers. When faced with a problem, they seem to be deliberately choosing the most complex possible solution for it.
When another programmer runs a problem and proposed solution by me, the most common answer he gets is: "the solution to that problem can be a lot simpler". Then he thinks about it and comes up with a significantly simpler, better solution on the spot. But I notice a touch of irritation, a hint that he doesn't really want to do it the simple way. It baffles me.
This is complexity for complexity's sake. Pay no attention to the disclaimer at the start of the article. They threw every buzzword-heavy bit of tech they could find at it, creating a Frankenstein monster.
Looking at their diagrams, it seems the k8s cluster exists solely to handle their monitoring and logging needs, which would be extreme overkill, especially since 18k metrics/samples and 7k logs per second are nothing. Plus you now suddenly need an S3-compatible storage backend for all your logs and metrics. Good thing Ceph comes 'free' with Proxmox, I guess.
Deploying an instance of Prometheus on every host is also unusual, to say the least, and I don't quite understand their comment on that. If you don't like a pull-based architecture (which is a valid point), why use one at all!?
There are many more push-based setups out there that are simpler to set up and less complex.
> k8s cluster exists solely to handle their monitoring and logging
It does our image processing, runs our analytics, our Sentry, our gitlab-ci runners, and quite a few other things not mentioned explicitly
> which would be extreme overkill
That's an interesting argument against k8s; if anything I find it much easier to work with -- once accustomed to its idioms, ofc -- than alternatives like dedicated VMs, Docker Swarm, etc
Getting HA and auto-healing without it is possible, of course, but it requires much more work, especially if you aim for a somewhat minimised amount of statefulness (as in deviation from the template of your system)
Also, S3-compatible storage backends are really plentiful, from commercial offerings to simpler ones like MinIO. Ceph just happens to be a bit higher of a deployment investment, with the benefit of fantastic performance, flexibility and resiliency. Somewhat like k8s itself, it's a bit daunting at first but actually makes things simpler in the long run (imo)
> 18k metrics/samples [...] are nothing
Well, yes and no: the number of metrics isn't relevant per se, but their cardinality is very relevant, and managing that in a single Prometheus instance will quickly require some serious vertical scaling, especially if you want to look at data over longer ranges (which, contrary to logs, we are interested in)
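To make the cardinality point concrete, here's a back-of-the-envelope sketch (all numbers hypothetical): the series count that drives Prometheus's memory usage is the product of each label's cardinality per metric, not the number of metric names.

```python
# Back-of-the-envelope series-count estimate (hypothetical numbers).
# Each unique label-value combination is its own time series, so the
# series count per metric is the product of the label cardinalities.

def series_count(label_cardinalities):
    """Distinct time series for one metric, given the number of
    possible values per label."""
    total = 1
    for n in label_cardinalities:
        total *= n
    return total

# A modest metric: 50 hosts x 4 instances x 10 endpoints x 5 status codes
modest = series_count([50, 4, 10, 5])          # 10,000 series from ONE metric

# Add a single high-cardinality label (say, a user_id with 10k values)
# and the same metric explodes ten-thousand-fold:
exploded = series_count([50, 4, 10, 5, 10_000])

print(modest)    # 10000
print(exploded)  # 100000000
```

This is why a handful of metric names can still overwhelm a single instance once the label combinations multiply out.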
> 7k logs per second [...] are nothing
That's an interesting take. Surely this isn't a world-record-shattering amount, but no one seems to have such a great non-SaaS-or-cheap solution for storing, sorting and querying this amount of logs either (at the resource efficiency of Loki, anyway), so maybe we just have a different set of expectations for log management
> If you don't like a pull-based architecture [...] why use one at all!? There are many more push-based setups out there that are simpler to set up and less complex.
Are there really?
That is non-SaaS and with as widespread 3rd-party software support as Prometheus has? i.e. great integration with essentially any database, webserver, runtime, OS, etc?
Because if we talk only about node metrics like CPU etc then yeah, sure there are plenty of options. But (maybe not so) obviously the diagram showing only node exporter doesn't mean that this is the only integration we use -- we collect prometheus metrics for MySQL, PHP-FPM, Varnish, Nginx, HAProxy, Elasticsearch, Redis, RabbitMQ etc (essentially every single piece of software we use).
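For illustration, pulling from that many exporters typically amounts to one scrape job per service, along these lines (target names are made up; the ports shown are the usual community-exporter defaults, so treat them as assumptions):

```yaml
# Hypothetical prometheus.yml excerpt -- one scrape job per exporter.
scrape_configs:
  - job_name: node              # node_exporter, default port 9100
    static_configs:
      - targets: ['web1:9100', 'db1:9100']
  - job_name: mysql             # mysqld_exporter, default port 9104
    static_configs:
      - targets: ['db1:9104']
  - job_name: redis             # redis_exporter, default port 9121
    static_configs:
      - targets: ['cache1:9121']
```

The breadth of ready-made exporters is what makes this pattern hard to replicate with push-based stacks.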
Fwiw I found very little in the way of open-source solutions to that problem that ticked as many boxes as Prometheus.
As for "simpler to set up and less complex", both Cortex and Loki would be really annoying to manage outside of Kubernetes, I'll happily give you that.
But... being able to easily deploy and manage such systems once you have Kubernetes is precisely one of the reasons to use it. You can't call it complex to deploy and then ignore the fact that it largely outweighs this cost by making reliable operation of complex-but-powerful software on top of it possible -- that is precisely one of the upsides of using it in the first place :)
Thank you for your reply and clarification. This is quite an interesting topic for me as I've tested and implemented similar setups.
> Does image processing, runs our analytics, [...]
Fair enough, I was strictly going by the diagrams.
From my experience with a somewhat similar setup (HA Loki, HA Prom + Thanos with a MinIO storage backend using Terraform + Ansible and docker), I have to say that the most complex and frustrating part was configuring Loki (this was way before they expanded their documentation, which still isn't great). I'd imagine this would be even more challenging under k8s, at least if you stray from the vanilla deployment and/or charts. I agree with your statement regarding Ceph; we use it extensively in production (probably at a much bigger scale). However, I think Ceph, unlike MinIO, just adds unnecessary complexity to your setup.
> Well yes and no, the number of metrics isn't relevant per se, but its cardinality is very relevant [...]
High cardinality is something you should avoid when using Prometheus -- for exactly that reason. There are, in my opinion, very few good reasons for dynamic labels (ignoring the baked-in cardinality from a setup like k8s). On first impulse I'd say you're doing metrics wrong, but then again, I don't know enough about your use case. Maxing out a single instance of Prometheus is no easy feat, however, especially if your infra isn't that complex and/or big.
I've used Thanos for so long now that I have to ask: how does the Cortex compactor handle range queries? Does it also compact and create additional 5m and 1h resolution metrics? These might help with your longer range queries.
Just out of curiosity, have you had a look at alternatives like VictoriaMetrics?
> 7k logs per second [...] are nothing
My remark was just about the added complexity, as this depends solely on the size of your log messages. If you don't need or use the (awesome!) capabilities of Loki + Grafana and just need a place for long-term storage of your logs, a 'simple' rsyslog server will do just fine.
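A minimal sketch of what "a 'simple' rsyslog server" means in practice (hostnames and the file layout are hypothetical; the module and action names are standard rsyslog RainerScript):

```
# --- central server (/etc/rsyslog.conf): accept TCP syslog on 514 ---
module(load="imtcp")
input(type="imtcp" port="514")

# Write each sending host's logs to its own file for cheap long-term storage.
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="PerHost")

# --- each client (/etc/rsyslog.d/forward.conf): ship everything over TCP ---
*.* @@logs.example.internal:514
```

You lose Loki's label-based querying, but for pure retention plus grep this is hard to beat on operational complexity.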
> we collect prometheus metrics for MySQL, PHP-FPM, Varnish [...]
Many (if not all) of these can be handled by Telegraf or Fluentd plus InfluxDB (not that I've used that myself; I absolutely love Prometheus and its ecosystem). My tongue-in-cheek comment was mostly about the Prometheus instance you deploy on every server just to scrape metrics locally and remote-write them into Cortex. Why not the more usual setup of (one or more) central Prometheus instances scraping their targets and writing to Cortex?
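The "more usual setup" I mean is roughly this (the Cortex endpoint URL and target names are hypothetical; `remote_write` is Prometheus's standard forwarding mechanism): one central Prometheus scrapes all hosts and forwards samples to Cortex, instead of a Prometheus per host.

```yaml
# Hypothetical central-Prometheus config: scrape remote targets,
# forward all samples to Cortex via remote_write.
scrape_configs:
  - job_name: node              # node_exporter default port
    static_configs:
      - targets: ['host-a:9100', 'host-b:9100']

remote_write:
  - url: http://cortex.example.internal/api/v1/push  # hypothetical Cortex push endpoint
```

One scraper (or an HA pair) covers the fleet, rather than running an agent-mode Prometheus everywhere.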
parent reminds me of a quote from David Graeber's "Bullshit Jobs"
"Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the, universally reviled, unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc.)—and particularly its financial avatars—but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value."
Sure, but the impact you’re having has the potential to reach many orders of magnitude fewer people. It’s a trade off, and I personally wouldn’t feel a sense of superiority.
Yes, a company will pay you to work on open source projects that are considered valuable for that company.
I am surprised that Marak hasn't been offered a satisfactory job that pays six figures already. Perhaps he makes some mistakes while applying, or luck hasn't been on his side.
That structure resembles our brain: a reactive, resilient, shifting network of connections among billions of individual human brains using smartphones and other technological devices, storing, aggregating and routing information in complex and arbitrary ways. If consciousness is to be found in our technology, I wouldn't look at the individual smartphone, but at the network of all smartphones.
100%. I disagree with the article in that there’s nothing inherently mysterious or “hard” (like the “hard” problem of consciousness) in the inner machinations of a smartphone. In the inner machinations of a mind, yes, but not in a smartphone. Everything that goes on within it is designed and controlled and can be explained by a human.
Large networks, e.g. the brain or the Internet, on the other hand, are exponentially more complex and so offer the possibility of being unexplainable when looking at certain behaviors. A recursive problem in a sense.
> Everything that goes on within it is designed and controlled and can be explained by _a_ human.
Emphasis on _a_ human.
I've heard that there are so many layers of abstraction and obfuscation that no one person can thoroughly explain every layer of a modern computer, from the volts in the bits, to the web front end, through the cloud.
The hard problem of consciousness, as philosophers of mind use the term, is not to understand how the mind works or how it produces consciousness, but rather to explain how it could possibly produce consciousness--it's a metaphysical problem.
(Physicalists such as Daniel Dennett [and myself] deny that there is such a problem--that dualists like David Chalmers are operating off of erroneous intuitions, not sound arguments.)
I largely agree with you, but how do you answer the reducibility problem? At what point does a network become complex enough to form consciousness? I think the writer of this article likely believes that there is no such point -- that all information processing is a consciously experienced phenomenon on a sliding scale of complexity.
I often hear AI researchers say that AI behaviours can only be explained in limited cases, not in general.
I’d be surprised if there is anything that it’s like to be a smartphone, or even an AI running in a smartphone, but I wouldn’t rule out the possibility.