If you want to provide uninterrupted service to your clients, you'll have to spend some $. You want redundancy: machines hosted in different locations, backup prod servers, monitoring, analysis tools. Even for 1k monthly users, if you want reliability, it will increase the costs.
In my experience, the complicated setups justified by the argument of "reliability" have more downtime than a single VPS. The reason is probably that there are more moving parts, so more has to be maintained and more can go wrong.
These days, a single VPS in the right datacenter has excellent uptime.
Agreed. I'll probably be downvoted, but these setups strike me as coming from people who'd rather drink the Kool-Aid than be pragmatic and use only what they need.
I've also had very high reliability with a single VPS. At times they've actually given me less downtime than AWS services.
At work, I aim for four nines. (We put three nines in the legal paperwork).
I can't reliably hit four nines with single-VPS platforms on my typical workloads; I need load balancers and redundant app servers. I could quite likely hit three nines using single VPSes. But if a client wants 99.9% SLAs, they'll be paying for HA, and I'll deploy redundant EC2 instances, multi-region RDS, and an ELB, and charge them 3 or 4 times what the OP is spending for it. (And I'll almost always deliver 99.99% availability.)
For my own stuff, or for friends and people I'm doing cost-saving favours for, I'll explain how much extra it costs to guarantee less than an hour of downtime a month, set realistic expectations from historical experience of how much downtime a non-HA platform might see in their use case, and often choose along with them a single VPS (or even dirt-cheap cPanel hosting), understanding and accepting the risks in exchange for saving upwards of a couple of hundred bucks per month.
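For context on those targets, here's the standard nines arithmetic as a quick Python sketch (nothing project-specific, just a 730-hour average month):

    # Rough downtime budgets implied by common availability targets.
    HOURS_PER_MONTH = 730  # average month

    for label, availability in [("three nines", 0.999), ("four nines", 0.9999)]:
        budget_min = (1 - availability) * HOURS_PER_MONTH * 60
        print(f"{label} ({availability:.2%}): ~{budget_min:.0f} min of downtime per month")

Three nines is roughly the "less than an hour a month" budget mentioned above; four nines leaves under five minutes, which is why it's hard to hit without redundancy.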
I think EC2's SLA gives 99.99% availability, with no need to scale across regions or even AZs. Multi-AZ RDS is 99.95%. We have a simple ELB/EC2/RDS/S3 stack in us-east-1, need high availability for a very small number of users, and run very cheap.
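One caveat: when every layer of that stack has to be up for a request to succeed, the SLAs multiply, so the combined floor is lower than any single link. A back-of-the-envelope sketch in Python (the EC2 and multi-AZ RDS figures are the ones quoted above; the ELB and S3 numbers are illustrative assumptions, not checked against AWS's current SLA pages):

    # Serially dependent services: the availability floor is the product
    # of the individual SLAs, not the best (or worst) one alone.
    slas = {"ELB": 0.9999, "EC2": 0.9999, "RDS multi-AZ": 0.9995, "S3": 0.9999}

    combined = 1.0
    for service, sla in slas.items():
        combined *= sla

    print(f"combined floor: {combined:.4%}")  # ~99.92%, short of four nines

So even if each piece meets its SLA, the stack as a whole is only contractually good for something between three and four nines.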
A single-VPS setup might be OK for serving content over the web, but in my experience the pain begins when your software starts doing async processing: long-running cron jobs, queue processing. If you're doing that on your web server machine, there will be downtime.
I know this because I've gone through these issues with each of my projects. Just recently, an infinite-loop bug in a cron job ground my "single VPS" setup to a halt (and took the web server down with it).
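FWIW, the boring guard for that failure mode is a lock plus a hard timeout around each cron run, so a hung or looping job can't stack up and starve the box. A minimal Python sketch; the job path and limits are hypothetical placeholders:

    import fcntl
    import subprocess
    import sys

    JOB = ["/usr/local/bin/process-queue"]  # hypothetical job, illustration only
    LOCK_PATH = "/tmp/process-queue.lock"
    TIME_BUDGET_SECS = 600  # hard kill after 10 minutes

    def main() -> int:
        lock = open(LOCK_PATH, "w")
        try:
            # Non-blocking exclusive lock: if the previous run is still going,
            # skip this one instead of stacking another copy on top of it.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("previous run still active, skipping", file=sys.stderr)
            return 0
        try:
            # Run the real job as a subprocess so a hang or infinite loop
            # gets killed at the time budget instead of spinning forever.
            subprocess.run(JOB, timeout=TIME_BUDGET_SECS, check=True)
        except subprocess.TimeoutExpired:
            print("job exceeded time budget and was killed", file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Same idea as wrapping the crontab entry in flock(1) and timeout(1); it doesn't remove the shared-box risk, but it turns "infinite loop takes down the web server" into "job gets killed and logged".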
> In my experience, the complicated setups justified by the argument of "reliability" have more downtime than a single VPS. The reason is probably that there are more moving parts, so more has to be maintained and more can go wrong.
> These days, a single VPS in the right datacenter has excellent uptime.
Again, maybe in your experience, but that's not universal. There's literally no redundancy when running everything off a single VPS, and if that datacenter has network or hardware problems, then your service is down.
Is redundancy necessary at the scale of OP's app, considering it generates zero income? Most likely not, but that's the decision they've made and there's nothing wrong with that.
What does excellent uptime mean in your book? With DigitalOcean's AMS2 region I had regular downtime every few weeks, and while I was alright with it, if I'd had another VPS in another datacenter those outages would've had next to no effect on the customer experience. An hour or more of downtime every two weeks works out to roughly 99.7% availability at best, and that isn't excellent.
https://aws.amazon.com/message/41926/ — this lasted hours and affected almost everyone using us-east-1; a large portion of the internet was unavailable because they had no multi-region setups.
Two Hetzner CPX31 boxes sound like they'd do just fine here too, providing the redundancy you mention for a fraction of the cost. Or get the boxes from different companies, for the same sort of overall price.
Yes, some of the other tools could arguably be worth paying for, but if the author's concern is that he's short on money and $140 is a lot, why not KISS and use only what's needed, then scale as and when needed in the future?
And $140/month is probably pretty good value there... even if it's just being able to point potential employers/recruiters at this blog post as evidence of experience building and running an HA website with more-advanced-than-free-Google-Analytics user behaviour tracking.
If your 55k MAU want uninterrupted service, they need to be paying for it (in dollars or monetisable attention and/or privacy).
On a site currently generating zero revenue, I hope the OP is happy paying most of that $145/month as a learning experience or for resume bullet points (which are perfectly valid ways to spend your money). They've admitted elsewhere in the comments that the two $40/month droplets are way oversized (an attempt to solve a problem that turned out not to be droplet-size/resource related), so without the redundancy and without AWS-hosted Metabase, this would be about $100/month cheaper to run.
I still think that's over-provisioned or under-engineered. Like others have commented, I'd be surprised if the features you can see on the site require any more than the $15/month the FAQ claims it costs to run, plus perhaps the $10/month Disqus expense. That's about where a hobby/side-gig project should sit for a lot of devs before you start thinking about how to make it pay for itself... YMMV, especially if you're not already comfortably earning at least a junior-dev salary in some reasonably well-paying part of the world.